Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636780723 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov 13 05:18:45.552: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.557: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 05:18:45.584: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 05:18:45.652: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 05:18:45.652: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 05:18:45.652: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 05:18:45.652: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 05:18:45.652: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 05:18:45.670: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 05:18:45.670: INFO: e2e test version: v1.21.5
Nov 13 05:18:45.671: INFO: kube-apiserver version: v1.21.1
Nov 13 05:18:45.672: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.678: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Nov 13 05:18:45.673: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.695: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 13 05:18:45.679: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.699: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Nov 13 05:18:45.685: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.707: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 13 05:18:45.688: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.709: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 13 05:18:45.704: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.726: INFO: Cluster IP family: ipv4
S
------------------------------
Nov 13 05:18:45.706: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.728: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 13 05:18:45.709: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.730: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov 13 05:18:45.713: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.741: INFO: Cluster IP family: ipv4
Nov 13 05:18:45.714: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:45.741: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1113 05:18:45.722206 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.722: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.725: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Nov 13 05:18:51.759: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8278 PodName:hostexec-node2-vpbrf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:51.759: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:52.030: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Nov 13 05:18:52.030: INFO: exec node2: stdout: "0\n"
Nov 13 05:18:52.030: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Nov 13 05:18:52.030: INFO: exec node2: exit code: 0
Nov 13 05:18:52.030: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:18:52.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8278" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [6.350 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType File [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-file
W1113 05:18:45.783181 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.783: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.785: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType File [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124
STEP: Create a pod for further testing
Nov 13 05:18:45.813: INFO: The status of Pod test-hostpath-type-587jf is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:18:47.817: INFO: The status of Pod test-hostpath-type-587jf is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:18:49.817: INFO: The status of Pod test-hostpath-type-587jf is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:18:51.816: INFO: The status of Pod test-hostpath-type-587jf is Running (Ready = true)
STEP: running on node node1
STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate
[It] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType File [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:18:57.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-file-9837" for this suite.
• [SLOW TEST:12.129 seconds]
[sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should fail on mounting file 'afile' when HostPathType is HostPathSocket
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156
------------------------------
{"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket","total":-1,"completed":1,"skipped":14,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
W1113 05:18:45.878336 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.878: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.880: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Nov 13 05:18:45.895: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1103" to be "Succeeded or Failed"
Nov 13 05:18:45.897: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604687ms
Nov 13 05:18:47.903: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008572509s
Nov 13 05:18:49.908: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013226415s
Nov 13 05:18:51.915: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020682731s
Nov 13 05:18:53.919: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024680789s
Nov 13 05:18:55.923: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028720031s
Nov 13 05:18:57.928: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.032745616s
STEP: Saw pod success
Nov 13 05:18:57.928: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Nov 13 05:18:57.930: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-2:
STEP: delete the pod
Nov 13 05:18:58.035: INFO: Waiting for pod pod-host-path-test to disappear
Nov 13 05:18:58.037: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:18:58.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1103" for this suite.
• [SLOW TEST:12.190 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":1,"skipped":56,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1113 05:18:45.886454 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.886: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.888: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea"
Nov 13 05:18:51.915: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea && dd if=/dev/zero of=/tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea/file] Namespace:persistent-local-volumes-test-9012 PodName:hostexec-node2-xlhbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:51.915: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:18:52.705: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9012 PodName:hostexec-node2-xlhbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:52.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:18:52.954: INFO: Creating a PV followed by a PVC
Nov 13 05:18:52.960: INFO: Waiting for PV local-pvn9d48 to bind to PVC pvc-qlpm7
Nov 13 05:18:52.960: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qlpm7] to have phase Bound
Nov 13 05:18:52.963: INFO: PersistentVolumeClaim pvc-qlpm7 found but phase is Pending instead of Bound.
Nov 13 05:18:54.966: INFO: PersistentVolumeClaim pvc-qlpm7 found but phase is Pending instead of Bound.
Nov 13 05:18:56.972: INFO: PersistentVolumeClaim pvc-qlpm7 found and phase=Bound (4.011967481s)
Nov 13 05:18:56.972: INFO: Waiting up to 3m0s for PersistentVolume local-pvn9d48 to have phase Bound
Nov 13 05:18:56.975: INFO: PersistentVolume local-pvn9d48 found and phase=Bound (2.188136ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
Nov 13 05:18:56.980: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:18:56.981: INFO: Deleting PersistentVolumeClaim "pvc-qlpm7"
Nov 13 05:18:56.987: INFO: Deleting PersistentVolume "local-pvn9d48"
Nov 13 05:18:56.992: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9012 PodName:hostexec-node2-xlhbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:56.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea/file
Nov 13 05:18:58.189: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9012 PodName:hostexec-node2-xlhbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:58.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea
Nov 13 05:18:58.434: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-86f27920-8cac-40b9-a3ee-9c32044c25ea] Namespace:persistent-local-volumes-test-9012 PodName:hostexec-node2-xlhbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:58.434: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:18:58.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9012" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [12.854 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set fsGroup for one pod [Slow] [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267

      We don't set fsGroup on block device, skipped.

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Directory [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-directory
W1113 05:18:45.780703 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.780: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.783: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Directory [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57
STEP: Create a pod for further testing
Nov 13 05:18:45.808: INFO: The status of Pod test-hostpath-type-lbfnc is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:18:47.812: INFO: The status of Pod test-hostpath-type-lbfnc is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:18:49.811: INFO: The status of Pod test-hostpath-type-lbfnc is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:18:51.813: INFO: The status of Pod test-hostpath-type-lbfnc is Running (Ready = true)
STEP: running on node node2
STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate
[It] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Directory [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:18:59.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-directory-9028" for this suite.

• [SLOW TEST:14.123 seconds]
[sig-storage] HostPathType Directory [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev","total":-1,"completed":1,"skipped":7,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:57.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Nov 13 05:19:03.941: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6048 PodName:hostexec-node1-kkw7s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:03.942: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:04.107: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Nov 13 05:19:04.107: INFO: exec node1: stdout: "0\n"
Nov 13 05:19:04.107: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Nov 13 05:19:04.107: INFO: exec node1: exit code: 0
Nov 13 05:19:04.107: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:04.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6048" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.220 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:04.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
Nov 13 05:19:04.190: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:04.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-1498" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231
    using 1 containers and 2 PDs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1113 05:18:45.901603 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.901: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.903: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:18:49.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8d49b009-870c-490d-a219-f87ac5afb147] Namespace:persistent-local-volumes-test-3649 PodName:hostexec-node1-tzwrw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:49.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:18:50.046: INFO: Creating a PV followed by a PVC
Nov 13 05:18:50.053: INFO: Waiting for PV local-pvrjdpq to bind to PVC pvc-jwcd8
Nov 13 05:18:50.053: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jwcd8] to have phase Bound
Nov 13 05:18:50.055: INFO: PersistentVolumeClaim pvc-jwcd8 found but phase is Pending instead of Bound.
Nov 13 05:18:52.059: INFO: PersistentVolumeClaim pvc-jwcd8 found but phase is Pending instead of Bound.
Nov 13 05:18:54.062: INFO: PersistentVolumeClaim pvc-jwcd8 found but phase is Pending instead of Bound.
Nov 13 05:18:56.067: INFO: PersistentVolumeClaim pvc-jwcd8 found but phase is Pending instead of Bound.
Nov 13 05:18:58.070: INFO: PersistentVolumeClaim pvc-jwcd8 found and phase=Bound (8.016884161s) Nov 13 05:18:58.070: INFO: Waiting up to 3m0s for PersistentVolume local-pvrjdpq to have phase Bound Nov 13 05:18:58.073: INFO: PersistentVolume local-pvrjdpq found and phase=Bound (2.589801ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:19:10.100: INFO: pod "pod-f2b64f0a-124d-4419-bcfb-ce0656a9b1ef" created on Node "node1" STEP: Writing in pod1 Nov 13 05:19:10.100: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3649 PodName:pod-f2b64f0a-124d-4419-bcfb-ce0656a9b1ef ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:19:10.100: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:19:10.190: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:19:10.190: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3649 PodName:pod-f2b64f0a-124d-4419-bcfb-ce0656a9b1ef ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:19:10.190: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:19:10.265: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-f2b64f0a-124d-4419-bcfb-ce0656a9b1ef in namespace persistent-local-volumes-test-3649
[AfterEach] [Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:10.271: INFO: Deleting PersistentVolumeClaim "pvc-jwcd8"
Nov 13 05:19:10.274: INFO: Deleting PersistentVolume "local-pvrjdpq"
STEP: Removing the test directory
Nov 13 05:19:10.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d49b009-870c-490d-a219-f87ac5afb147] Namespace:persistent-local-volumes-test-3649 PodName:hostexec-node1-tzwrw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:10.278: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:10.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3649" for this suite.
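An aside for readers reproducing the spec above by hand: the write/read cycle that podRWCmdExec logs is plain shell executed inside the pod. A minimal standalone sketch, with a temp directory standing in for the pod's /mnt/volume1 mount (the path is illustrative, not the real /tmp/local-volume-test-<uuid> backing dir):

```shell
# Standalone sketch of the volume write/read cycle exercised above.
# The real test execs these via the e2e framework's ExecWithOptions;
# the temp dir here is an illustrative stand-in for /mnt/volume1.
vol=$(mktemp -d)

# "Writing in pod1"
mkdir -p "$vol"
echo test-file-content > "$vol/test-file"

# "Reading in pod1"
content=$(cat "$vol/test-file")

# "Removing the test directory"
rm -r "$vol"
```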
• [SLOW TEST:24.574 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":71,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:52.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:18:54.234: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128 && mount --bind /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128 /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-node1-ps5g2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:54.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:18:54.323: INFO: Creating a PV followed by a PVC
Nov 13 05:18:54.331: INFO: Waiting for PV local-pv4jhtc to bind to PVC pvc-6w29b
Nov 13 05:18:54.331: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6w29b] to have phase Bound
Nov 13 05:18:54.334: INFO: PersistentVolumeClaim pvc-6w29b found but phase is Pending instead of Bound.
Nov 13 05:18:56.339: INFO: PersistentVolumeClaim pvc-6w29b found but phase is Pending instead of Bound.
Nov 13 05:18:58.343: INFO: PersistentVolumeClaim pvc-6w29b found and phase=Bound (4.011765082s)
Nov 13 05:18:58.343: INFO: Waiting up to 3m0s for PersistentVolume local-pv4jhtc to have phase Bound
Nov 13 05:18:58.345: INFO: PersistentVolume local-pv4jhtc found and phase=Bound (2.073165ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:19:12.371: INFO: pod "pod-d88c1e72-05fa-44be-8fd5-fdbce05ac75b" created on Node "node1"
STEP: Writing in pod1
Nov 13 05:19:12.371: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6428 PodName:pod-d88c1e72-05fa-44be-8fd5-fdbce05ac75b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:12.371: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:12.658: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Nov 13 05:19:12.658: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6428 PodName:pod-d88c1e72-05fa-44be-8fd5-fdbce05ac75b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:12.658: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:12.880: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod1
Nov 13 05:19:12.880: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6428 PodName:pod-d88c1e72-05fa-44be-8fd5-fdbce05ac75b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:12.880: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:13.032: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128 > /mnt/volume1/test-file", out: "", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-d88c1e72-05fa-44be-8fd5-fdbce05ac75b in namespace persistent-local-volumes-test-6428
[AfterEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:13.037: INFO: Deleting PersistentVolumeClaim "pvc-6w29b"
Nov 13 05:19:13.041: INFO: Deleting PersistentVolume "local-pv4jhtc"
STEP: Removing the test directory
Nov 13 05:19:13.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128 && rm -r /tmp/local-volume-test-cf6c922b-6ae1-4dee-9f8f-803a8d15b128] Namespace:persistent-local-volumes-test-6428 PodName:hostexec-node1-ps5g2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:13.045: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:13.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6428" for this suite.
• [SLOW TEST:21.605 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":65,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:13.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68
Nov 13 05:19:13.835: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:13.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1375" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
GlusterFS [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128
should be mountable
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129
Only supported for node OS distro [gci ubuntu custom] (not debian)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:13.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77
Nov 13 05:19:13.872: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:13.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2722" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111
Nov 13 05:19:13.880: INFO: AfterEach: Cleaning up test resources
Nov 13 05:19:13.880: INFO: pvc is nil
Nov 13 05:19:13.880: INFO: pv is nil
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:59.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Nov 13 05:19:13.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8777 PodName:hostexec-node2-8rplz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:13.932: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:14.022: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Nov 13 05:19:14.022: INFO: exec node2: stdout: "0\n"
Nov 13 05:19:14.022: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Nov 13 05:19:14.022: INFO: exec node2: exit code: 0
Nov 13 05:19:14.022: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8777" for this suite.
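One detail worth noting in the skip above: the probe reports exit code 0 even though ls printed an error, because `sh -c 'ls ... | wc -l'` runs a pipeline, and a pipeline's exit status is that of its last stage (`wc -l` succeeds on empty input). A small demonstration, with bash assumed and an illustrative missing path fabricated for the example:

```shell
# A pipeline's exit status is that of its last command, so the failed
# `ls` is masked by the succeeding `wc -l` -- matching the "exit code: 0"
# in the log despite the stderr message. The path below is illustrative.
missing="$(mktemp -d)/google-local-ssds-scsi-fs"

ls -1 "$missing" 2>/dev/null | wc -l    # prints 0: wc counted no lines
status_default=$?                       # 0: status comes from wc, not ls

set -o pipefail                         # bash option: propagate ls's failure
ls -1 "$missing" 2>/dev/null | wc -l
status_pipefail=$?                      # nonzero: ls's status wins
set +o pipefail
```

This is why the detection logic keys off the "0\n" stdout (file count) rather than the exit code.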
S [SKIPPING] in Spec Setup (BeforeEach) [14.147 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:14.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 13 05:19:14.080: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:14.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-1831" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should create volume metrics with the correct PVC ref [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204
Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1113 05:18:45.773143 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.773: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.775: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:18:51.806: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229-backend && mount --bind /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229-backend /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229-backend && ln -s /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229-backend /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229] Namespace:persistent-local-volumes-test-4589 PodName:hostexec-node2-8fjwr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:51.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:18:52.312: INFO: Creating a PV followed by a PVC
Nov 13 05:18:52.319: INFO: Waiting for PV local-pvp9mr6 to bind to PVC pvc-hv4dv
Nov 13 05:18:52.319: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hv4dv] to have phase Bound
Nov 13 05:18:52.322: INFO: PersistentVolumeClaim pvc-hv4dv found but phase is Pending instead of Bound.
Nov 13 05:18:54.325: INFO: PersistentVolumeClaim pvc-hv4dv found and phase=Bound (2.006336043s)
Nov 13 05:18:54.326: INFO: Waiting up to 3m0s for PersistentVolume local-pvp9mr6 to have phase Bound
Nov 13 05:18:54.327: INFO: PersistentVolume local-pvp9mr6 found and phase=Bound (1.771445ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:19:00.354: INFO: pod "pod-227a8c5d-16f8-4329-8295-e40c62ad01d7" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:19:00.354: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4589 PodName:pod-227a8c5d-16f8-4329-8295-e40c62ad01d7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:00.354: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:00.703: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Nov 13 05:19:00.703: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4589 PodName:pod-227a8c5d-16f8-4329-8295-e40c62ad01d7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:00.703: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:00.794: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-227a8c5d-16f8-4329-8295-e40c62ad01d7 in namespace persistent-local-volumes-test-4589
STEP: Creating pod2
STEP: Creating a pod
Nov 13 05:19:14.819: INFO: pod "pod-9eb697ee-dfeb-4772-be30-00c1151539a6" created on Node "node2"
STEP: Reading in pod2
Nov 13 05:19:14.819: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4589 PodName:pod-9eb697ee-dfeb-4772-be30-00c1151539a6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:14.819: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:15.167: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Deleting pod2
STEP: Deleting pod pod-9eb697ee-dfeb-4772-be30-00c1151539a6 in namespace persistent-local-volumes-test-4589
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:15.172: INFO: Deleting PersistentVolumeClaim "pvc-hv4dv"
Nov 13 05:19:15.176: INFO: Deleting PersistentVolume "local-pvp9mr6"
STEP: Removing the test directory
Nov 13 05:19:15.181: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229 && umount /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229-backend && rm -r /tmp/local-volume-test-b1226227-5dec-4301-b48d-14dad926a229-backend] Namespace:persistent-local-volumes-test-4589 PodName:hostexec-node2-8fjwr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:15.181: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:15.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4589" for this suite.
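For reference, the dir-link-bindmounted layout this spec builds and tears down on the node (backend directory, bind-mounted over itself, exposed through a symlink) can be sketched as follows. The bind-mount steps require root, so they are left commented; the directory names are illustrative, not the real /tmp/local-volume-test-<uuid> paths:

```shell
# Sketch of the dir-link-bindmounted volume layout. Teardown order
# mirrors the AfterEach above: remove the link, unmount, remove the dir.
backend=$(mktemp -d)
link="${backend}-link"                  # illustrative name

# mount --bind "$backend" "$backend"    # setup step; requires root
ln -s "$backend" "$link"

backend_real=$(readlink -f "$backend")
resolved=$(readlink -f "$link")         # resolves through the symlink

rm "$link"                              # the symlink goes first
# umount "$backend"                     # would undo the bind mount
rm -r "$backend"
```

The ordering matters: removing the backend while the symlink (or mount) still references it would leave a dangling link or a busy mount on the node.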
• [SLOW TEST:29.718 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:58.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-file
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124
STEP: Create a pod for further testing
Nov 13 05:18:58.786: INFO: The status of Pod test-hostpath-type-bsjwg is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:00.790: INFO: The status of Pod test-hostpath-type-bsjwg is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:02.791: INFO: The status of Pod test-hostpath-type-bsjwg is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:04.789: INFO: The status of Pod test-hostpath-type-bsjwg is Running (Ready = true)
STEP: running on node node2
STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate
[It] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:16.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-file-1470" for this suite.
• [SLOW TEST:18.103 seconds]
[sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137
------------------------------
{"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile","total":-1,"completed":1,"skipped":59,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:04.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Nov 13 05:19:20.321: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2045 PodName:hostexec-node1-sllmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:20.321: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:20.495: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Nov 13 05:19:20.495: INFO: exec node1: stdout: "0\n"
Nov 13 05:19:20.495: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Nov 13 05:19:20.495: INFO: exec node1: exit code: 0
Nov 13 05:19:20.495: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:20.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2045" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [16.226 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set fsGroup for one pod [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:10.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-socket
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191
STEP: Create a pod for further testing
Nov 13 05:19:10.553: INFO: The status of Pod test-hostpath-type-wm97p is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:12.556: INFO: The status of Pod test-hostpath-type-wm97p is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:14.556: INFO: The status of Pod test-hostpath-type-wm97p is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:16.562: INFO: The status of Pod test-hostpath-type-wm97p is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:18.559: INFO: The status of Pod test-hostpath-type-wm97p is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:20.557: INFO: The status of Pod test-hostpath-type-wm97p is Running (Ready = true)
STEP: running on node node2
[It] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Socket [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:24.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-socket-9136" for this suite.

• [SLOW TEST:14.126 seconds]
[sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev","total":-1,"completed":2,"skipped":75,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:24.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mounted-volume-expand
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:61
Nov 13 05:19:24.712: INFO: Only supported for providers [aws gce] (not local)
[AfterEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:24.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "mounted-volume-expand-4189" for this suite.
[AfterEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:108
Nov 13 05:19:24.726: INFO: AfterEach: Cleaning up resources for mounted volume resize

S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122

  Only supported for providers [aws gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:45.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1113 05:18:45.774050 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 05:18:45.774: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 05:18:45.776: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:18:51.814: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4552cade-d4d7-4315-9083-fc2e7b9965d6 && mount --bind /tmp/local-volume-test-4552cade-d4d7-4315-9083-fc2e7b9965d6 /tmp/local-volume-test-4552cade-d4d7-4315-9083-fc2e7b9965d6] Namespace:persistent-local-volumes-test-9450 PodName:hostexec-node1-pddrb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:18:51.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:18:51.914: INFO: Creating a PV followed by a PVC
Nov 13 05:18:51.920: INFO: Waiting for PV local-pvvvvmm to bind to PVC pvc-zj5sf
Nov 13 05:18:51.920: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zj5sf] to have phase Bound
Nov 13 05:18:51.923: INFO: PersistentVolumeClaim pvc-zj5sf found but phase is Pending instead of Bound.
Nov 13 05:18:53.926: INFO: PersistentVolumeClaim pvc-zj5sf found but phase is Pending instead of Bound.
Nov 13 05:18:55.929: INFO: PersistentVolumeClaim pvc-zj5sf found but phase is Pending instead of Bound.
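For readers reproducing the `dir-bindmounted` setup by hand: the `ExecWithOptions` entry above reduces to creating a scratch directory on the node and bind-mounting it onto itself, executed inside the node's mount namespace. A dry-run sketch (the directory name is illustrative, and the real commands need root on the node):

```shell
# Dry-run reconstruction of the bind-mounted local-volume setup/teardown the
# test drives through its hostexec pod. The commands are echoed rather than
# executed, since mount/umount require root; the path is illustrative.
DIR=/tmp/local-volume-test-example
setup="mkdir $DIR && mount --bind $DIR $DIR"
teardown="umount $DIR && rm -r $DIR"
echo "nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c '$setup'"
echo "nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c '$teardown'"
```

Bind-mounting the directory onto itself gives the kubelet a real mount point to treat as the local PV, without needing a separate device.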
Nov 13 05:18:57.933: INFO: PersistentVolumeClaim pvc-zj5sf found and phase=Bound (6.01271579s)
Nov 13 05:18:57.933: INFO: Waiting up to 3m0s for PersistentVolume local-pvvvvmm to have phase Bound
Nov 13 05:18:57.935: INFO: PersistentVolume local-pvvvvmm found and phase=Bound (2.174178ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Nov 13 05:19:07.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9450 exec pod-32d89970-e244-46b2-b71b-ad74fdb6bd77 --namespace=persistent-local-volumes-test-9450 -- stat -c %g /mnt/volume1'
Nov 13 05:19:08.325: INFO: stderr: ""
Nov 13 05:19:08.325: INFO: stdout: "1234\n"
STEP: Create second pod with same fsGroup and check fsGroup is correct
STEP: Creating a pod
Nov 13 05:19:24.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9450 exec pod-d9780f0c-9bf2-4436-85b2-0a71efdaa67d --namespace=persistent-local-volumes-test-9450 -- stat -c %g /mnt/volume1'
Nov 13 05:19:24.640: INFO: stderr: ""
Nov 13 05:19:24.640: INFO: stdout: "1234\n"
STEP: Deleting first pod
STEP: Deleting pod pod-32d89970-e244-46b2-b71b-ad74fdb6bd77 in namespace persistent-local-volumes-test-9450
STEP: Deleting second pod
STEP: Deleting pod pod-d9780f0c-9bf2-4436-85b2-0a71efdaa67d in namespace persistent-local-volumes-test-9450
[AfterEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:24.650: INFO: Deleting PersistentVolumeClaim "pvc-zj5sf"
Nov 13 05:19:24.654: INFO: Deleting PersistentVolume "local-pvvvvmm"
STEP: Removing the test directory
Nov 13 05:19:24.658: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-4552cade-d4d7-4315-9083-fc2e7b9965d6 && rm -r /tmp/local-volume-test-4552cade-d4d7-4315-9083-fc2e7b9965d6] Namespace:persistent-local-volumes-test-9450 PodName:hostexec-node1-pddrb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:24.658: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:24.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9450" for this suite.

• [SLOW TEST:39.096 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set same fsGroup for two pods simultaneously [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":1,"skipped":3,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:18:58.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:19:06.098: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-8dd8ccc7-db4e-449a-9749-bae7f48b4729 && mount --bind /tmp/local-volume-test-8dd8ccc7-db4e-449a-9749-bae7f48b4729 /tmp/local-volume-test-8dd8ccc7-db4e-449a-9749-bae7f48b4729] Namespace:persistent-local-volumes-test-5190 PodName:hostexec-node1-g676v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:06.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:19:06.193: INFO: Creating a PV followed by a PVC
Nov 13 05:19:06.201: INFO: Waiting for PV local-pvkwrzx to bind to PVC pvc-969tx
Nov 13 05:19:06.201: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-969tx] to have phase Bound
Nov 13 05:19:06.203: INFO: PersistentVolumeClaim pvc-969tx found but phase is Pending instead of Bound.
Nov 13 05:19:08.207: INFO: PersistentVolumeClaim pvc-969tx found but phase is Pending instead of Bound.
Nov 13 05:19:10.212: INFO: PersistentVolumeClaim pvc-969tx found but phase is Pending instead of Bound.
Nov 13 05:19:12.216: INFO: PersistentVolumeClaim pvc-969tx found and phase=Bound (6.015611263s)
Nov 13 05:19:12.216: INFO: Waiting up to 3m0s for PersistentVolume local-pvkwrzx to have phase Bound
Nov 13 05:19:12.219: INFO: PersistentVolume local-pvkwrzx found and phase=Bound (2.560131ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set fsGroup for one pod [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
STEP: Checking fsGroup is set
STEP: Creating a pod
Nov 13 05:19:28.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5190 exec pod-3ab05153-e448-4a6b-8adf-50bbb2a9c440 --namespace=persistent-local-volumes-test-5190 -- stat -c %g /mnt/volume1'
Nov 13 05:19:28.561: INFO: stderr: ""
Nov 13 05:19:28.561: INFO: stdout: "1234\n"
STEP: Deleting pod
STEP: Deleting pod pod-3ab05153-e448-4a6b-8adf-50bbb2a9c440 in namespace persistent-local-volumes-test-5190
[AfterEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:28.567: INFO: Deleting PersistentVolumeClaim "pvc-969tx"
Nov 13 05:19:28.571: INFO: Deleting PersistentVolume "local-pvkwrzx"
STEP: Removing the test directory
Nov 13 05:19:28.575: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-8dd8ccc7-db4e-449a-9749-bae7f48b4729 && rm -r /tmp/local-volume-test-8dd8ccc7-db4e-449a-9749-bae7f48b4729] Namespace:persistent-local-volumes-test-5190 PodName:hostexec-node1-g676v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:28.575: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:28.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5190" for this suite.

• [SLOW TEST:30.846 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set fsGroup for one pod [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":2,"skipped":57,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:24.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-char-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256
STEP: Create a pod for further testing
Nov 13 05:19:24.821: INFO: The status of Pod test-hostpath-type-gnb85 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:26.825: INFO: The status of Pod test-hostpath-type-gnb85 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:28.826: INFO: The status of Pod test-hostpath-type-gnb85 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:30.825: INFO: The status of Pod test-hostpath-type-gnb85 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:32.826: INFO: The status of Pod test-hostpath-type-gnb85 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:34.824: INFO: The status of Pod test-hostpath-type-gnb85 is Running (Ready = true)
STEP: running on node node1
STEP: Create a character device for further testing
Nov 13 05:19:34.826: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-4803 PodName:test-hostpath-type-gnb85 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:34.826: INFO: >>> kubeConfig: /root/.kube/config
[It] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Character Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:36.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-char-dev-4803" for this suite.
• [SLOW TEST:12.158 seconds]
[sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile","total":-1,"completed":3,"skipped":142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:37.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72
Nov 13 05:19:37.129: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
Nov 13 05:19:37.134: INFO: error finding default storageClass : No default storage class found
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:37.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-7546" for this suite.
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108

S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145

  error finding default storageClass : No default storage class found

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:14.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should let an external dynamic provisioner create and delete persistent volumes [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627
Nov 13 05:19:14.141: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: creating an external dynamic provisioner pod
STEP: locating the provisioner pod
STEP: creating a StorageClass
STEP: Creating a StorageClass
Nov 13 05:19:36.280: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: creating a claim with a external provisioning annotation
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-7205 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1572864000 0} {} 1500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-7205-externalrj96q,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
Nov 13 05:19:36.285: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4vvd7] to have phase Bound
Nov 13 05:19:36.287: INFO: PersistentVolumeClaim pvc-4vvd7 found but phase is Pending instead of Bound.
Nov 13 05:19:38.294: INFO: PersistentVolumeClaim pvc-4vvd7 found and phase=Bound (2.008983465s)
STEP: checking the claim
STEP: checking the PV
STEP: deleting claim "volume-provisioning-7205"/"pvc-4vvd7"
STEP: deleting the claim's PV "pvc-7879f86b-8530-4e6e-8faf-3f41852ff17a"
Nov 13 05:19:38.303: INFO: Waiting up to 20m0s for PersistentVolume pvc-7879f86b-8530-4e6e-8faf-3f41852ff17a to get deleted
Nov 13 05:19:38.305: INFO: PersistentVolume pvc-7879f86b-8530-4e6e-8faf-3f41852ff17a found and phase=Bound (2.10424ms)
Nov 13 05:19:43.313: INFO: PersistentVolume pvc-7879f86b-8530-4e6e-8faf-3f41852ff17a was removed
Nov 13 05:19:43.313: INFO: deleting claim "volume-provisioning-7205"/"pvc-4vvd7"
Nov 13 05:19:43.315: INFO: deleting storage class volume-provisioning-7205-externalrj96q
STEP: Deleting pod external-provisioner-572kb in namespace volume-provisioning-7205
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:43.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-7205" for this suite.
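The Pending-then-Bound polling that recurs throughout this log (the framework's claim-phase wait) can be approximated from the command line with `kubectl wait` on the claim's status phase, supported in recent kubectl releases. A dry-run sketch using the names from the test above (they are only meaningful inside that test run):

```shell
# Build the kubectl command that waits for a PVC to reach phase Bound,
# mirroring the framework's WaitForPersistentVolumeClaimPhase polling.
# Echoed as a dry run; run it against a live cluster to actually wait.
PVC=pvc-4vvd7                 # claim name from the log above; substitute your own
NS=volume-provisioning-7205   # namespace from the log above; substitute your own
cmd="kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/$PVC -n $NS --timeout=5m"
echo "$cmd"
```

The `--timeout=5m` mirrors the `Waiting up to timeout=5m0s` in the log entry above.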
• [SLOW TEST:29.235 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  DynamicProvisioner External
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:626
    should let an external dynamic provisioner create and delete persistent volumes [Slow]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]","total":-1,"completed":2,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:28.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220"
Nov 13 05:19:40.975: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220 && dd if=/dev/zero of=/tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220/file] Namespace:persistent-local-volumes-test-6667 PodName:hostexec-node2-jft7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:40.975: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:41.104: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6667 PodName:hostexec-node2-jft7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:41.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:19:41.248: INFO: Creating a PV followed by a PVC
Nov 13 05:19:41.255: INFO: Waiting for PV local-pvlqbfg to bind to PVC pvc-hpxl5
Nov 13 05:19:41.255: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hpxl5] to have phase Bound
Nov 13 05:19:41.258: INFO: PersistentVolumeClaim pvc-hpxl5 found but phase is Pending instead of Bound.
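For reference, the `blockfswithoutformat` volume above is backed by a loop device over a 20 MiB zero-filled file (4096-byte blocks × 5120). A dry-run sketch of the node-side commands the test issues (the path is illustrative, and attaching loop devices requires root):

```shell
# Reconstruct the loop-device setup from the log: back a file with zeros,
# attach it to the first free loop device (losetup -f), then look the
# attached device up again by grepping losetup's output. Echoed as a dry
# run rather than executed, since losetup requires root.
DIR=/tmp/local-volume-test-example
setup="mkdir -p $DIR && dd if=/dev/zero of=$DIR/file bs=4096 count=5120 && losetup -f $DIR/file"
find_dev="losetup | grep $DIR/file | awk '{ print \$1 }'"
echo "nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c '$setup'"
echo "nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c '$find_dev'"
```

Teardown, as seen later in the log, is the reverse: `losetup -d` on the discovered device, then `rm -r` on the backing directory.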
Nov 13 05:19:43.262: INFO: PersistentVolumeClaim pvc-hpxl5 found and phase=Bound (2.00650173s) Nov 13 05:19:43.262: INFO: Waiting up to 3m0s for PersistentVolume local-pvlqbfg to have phase Bound Nov 13 05:19:43.264: INFO: PersistentVolume local-pvlqbfg found and phase=Bound (1.918908ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:19:43.268: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:19:43.270: INFO: Deleting PersistentVolumeClaim "pvc-hpxl5" Nov 13 05:19:43.273: INFO: Deleting PersistentVolume "local-pvlqbfg" Nov 13 05:19:43.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6667 PodName:hostexec-node2-jft7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:19:43.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220/file Nov 13 05:19:43.378: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6667 PodName:hostexec-node2-jft7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 13 05:19:43.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220 Nov 13 05:19:43.525: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a39e092b-3dee-4a58-ba4d-64d79b8de220] Namespace:persistent-local-volumes-test-6667 PodName:hostexec-node2-jft7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:19:43.525: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:19:43.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6667" for this suite. S [SKIPPING] [14.910 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] 
Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:44.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:19:44.046: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:19:44.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-981" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:37.423: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:19:37.458: INFO: The status of Pod test-hostpath-type-qfwsn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:19:39.461: INFO: The status of Pod test-hostpath-type-qfwsn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:19:41.462: INFO: The status of Pod test-hostpath-type-qfwsn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:19:43.467: INFO: The status of Pod test-hostpath-type-qfwsn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:19:45.461: INFO: The status of Pod test-hostpath-type-qfwsn is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:19:51.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-190" for this suite. 
• [SLOW TEST:14.076 seconds]
[sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory","total":-1,"completed":4,"skipped":334,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:13.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951"
Nov 13 05:19:25.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951 && dd if=/dev/zero of=/tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951/file] Namespace:persistent-local-volumes-test-9351 PodName:hostexec-node1-g6nk6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:25.940: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:26.108: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9351 PodName:hostexec-node1-g6nk6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:26.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:19:26.871: INFO: Creating a PV followed by a PVC
Nov 13 05:19:26.877: INFO: Waiting for PV local-pv62rbc to bind to PVC pvc-xg6fd
Nov 13 05:19:26.877: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xg6fd] to have phase Bound
Nov 13 05:19:26.879: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:28.882: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:30.886: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:32.889: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:34.892: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:36.894: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:38.898: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
Nov 13 05:19:40.901: INFO: PersistentVolumeClaim pvc-xg6fd found but phase is Pending instead of Bound.
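The hostexec command above builds the "blockfswithoutformat" volume out of a 20 MiB zero-filled backing file attached to the first free loop device. Stripped of the `nsenter` wrapper, the same setup looks like this; the temp path is illustrative, and the `losetup` steps need root so they are only shown as comments:

```shell
# Same backing file as the test's command: 5120 blocks of 4096 bytes = 20 MiB.
vol=$(mktemp -d)
dd if=/dev/zero of="$vol/file" bs=4096 count=5120 2>/dev/null

# Attaching the file to a loop device requires root:
#   losetup -f "$vol/file"
# The test's second command then recovers the device name the same way:
#   losetup | grep "$vol/file" | awk '{ print $1 }'

wc -c < "$vol/file"    # prints 20971520
```

That recovered device name is what the log later reports as "/dev/loop0" during teardown.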
Nov 13 05:19:42.904: INFO: PersistentVolumeClaim pvc-xg6fd found and phase=Bound (16.027691491s)
Nov 13 05:19:42.904: INFO: Waiting up to 3m0s for PersistentVolume local-pv62rbc to have phase Bound
Nov 13 05:19:42.907: INFO: PersistentVolume local-pv62rbc found and phase=Bound (2.996374ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:19:52.935: INFO: pod "pod-fc8e0e29-7c45-4d26-8aa4-4a0f5dd9e02c" created on Node "node1"
STEP: Writing in pod1
Nov 13 05:19:52.935: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9351 PodName:pod-fc8e0e29-7c45-4d26-8aa4-4a0f5dd9e02c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:52.935: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:53.118: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Nov 13 05:19:53.118: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9351 PodName:pod-fc8e0e29-7c45-4d26-8aa4-4a0f5dd9e02c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:19:53.118: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:53.202: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-fc8e0e29-7c45-4d26-8aa4-4a0f5dd9e02c in namespace persistent-local-volumes-test-9351
[AfterEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:53.208: INFO: Deleting PersistentVolumeClaim "pvc-xg6fd"
Nov 13 05:19:53.212: INFO: Deleting PersistentVolume "local-pv62rbc"
Nov 13 05:19:53.216: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9351 PodName:hostexec-node1-g6nk6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:53.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951/file
Nov 13 05:19:53.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9351 PodName:hostexec-node1-g6nk6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:53.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951
Nov 13 05:19:54.082: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54561e9d-de67-4cde-a1fe-274663f86951] Namespace:persistent-local-volumes-test-9351 PodName:hostexec-node1-g6nk6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:54.082: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:56.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9351" for this suite.
• [SLOW TEST:42.457 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":83,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:56.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should test that deleting a claim before the volume is provisioned deletes the volume.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511
Nov 13 05:19:56.388: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local)
[AfterEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:56.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-8337" for this suite.
S [SKIPPING] [0.035 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
DynamicProvisioner [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152
should test that deleting a claim before the volume is provisioned deletes the volume. [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511
Only supported for providers [openstack gce aws gke vsphere azure] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:517
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:43.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-socket
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191
STEP: Create a pod for further testing
Nov 13 05:19:43.457: INFO: The status of Pod test-hostpath-type-9lx9j is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:45.461: INFO: The status of Pod test-hostpath-type-9lx9j is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:47.461: INFO: The status of Pod test-hostpath-type-9lx9j is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:49.460: INFO: The status of Pod test-hostpath-type-9lx9j is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:51.460: INFO: The status of Pod test-hostpath-type-9lx9j is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:53.462: INFO: The status of Pod test-hostpath-type-9lx9j is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:19:55.461: INFO: The status of Pod test-hostpath-type-9lx9j is Running (Ready = true)
STEP: running on node node1
[It] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:57.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-socket-7955" for this suite.
• [SLOW TEST:14.080 seconds]
[sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket","total":-1,"completed":3,"skipped":61,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:15.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:19:35.572: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae-backend && mount --bind /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae-backend /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae-backend && ln -s /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae-backend /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae] Namespace:persistent-local-volumes-test-1735 PodName:hostexec-node2-7g5vp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:35.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:19:35.780: INFO: Creating a PV followed by a PVC
Nov 13 05:19:35.789: INFO: Waiting for PV local-pvtzdhd to bind to PVC pvc-vvvz6
Nov 13 05:19:35.789: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vvvz6] to have phase Bound
Nov 13 05:19:35.791: INFO: PersistentVolumeClaim pvc-vvvz6 found but phase is Pending instead of Bound.
Nov 13 05:19:37.796: INFO: PersistentVolumeClaim pvc-vvvz6 found but phase is Pending instead of Bound.
Nov 13 05:19:39.800: INFO: PersistentVolumeClaim pvc-vvvz6 found but phase is Pending instead of Bound.
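The repeated "found but phase is Pending" lines come from the framework's bind wait, which simply re-reads the claim's phase every couple of seconds until it reports Bound or the 3m0s deadline expires. The general shape of that loop can be sketched as follows; `get_phase` is a hypothetical stand-in for the real phase lookup, not a function from the e2e framework:

```shell
# Poll a claim's phase every 2s until it is Bound, or fail once the deadline
# (default 180s, matching the log's timeout=3m0s) has passed. The caller must
# define get_phase, e.g. as a kubectl jsonpath query against a real cluster.
wait_for_bound() {
    pvc=$1
    deadline=$(( $(date +%s) + ${2:-180} ))
    while [ "$(get_phase "$pvc")" != "Bound" ]; do
        [ "$(date +%s)" -lt "$deadline" ] || return 1
        sleep 2
    done
}
```

Against a live cluster, `get_phase` could be something like `kubectl get pvc "$1" -o jsonpath='{.status.phase}'`.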
Nov 13 05:19:41.803: INFO: PersistentVolumeClaim pvc-vvvz6 found and phase=Bound (6.01434686s)
Nov 13 05:19:41.803: INFO: Waiting up to 3m0s for PersistentVolume local-pvtzdhd to have phase Bound
Nov 13 05:19:41.806: INFO: PersistentVolume local-pvtzdhd found and phase=Bound (2.530172ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
STEP: Checking fsGroup is set
STEP: Creating a pod
Nov 13 05:19:57.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1735 exec pod-50b8245f-ee79-41c9-bcfd-6ddc402ec90f --namespace=persistent-local-volumes-test-1735 -- stat -c %g /mnt/volume1'
Nov 13 05:19:58.522: INFO: stderr: ""
Nov 13 05:19:58.522: INFO: stdout: "1234\n"
STEP: Deleting pod
STEP: Deleting pod pod-50b8245f-ee79-41c9-bcfd-6ddc402ec90f in namespace persistent-local-volumes-test-1735
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:19:58.528: INFO: Deleting PersistentVolumeClaim "pvc-vvvz6"
Nov 13 05:19:58.531: INFO: Deleting PersistentVolume "local-pvtzdhd"
STEP: Removing the test directory
Nov 13 05:19:58.536: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae && umount /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae-backend && rm -r /tmp/local-volume-test-4255303c-fd66-4973-a5ef-ed52dec150ae-backend] Namespace:persistent-local-volumes-test-1735 PodName:hostexec-node2-7g5vp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:58.536: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:19:58.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1735" for this suite.
• [SLOW TEST:43.124 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":2,"skipped":32,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:57.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 13 05:19:57.567: INFO: Waiting up to 5m0s for pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2" in namespace "emptydir-7008" to be "Succeeded or Failed"
Nov 13 05:19:57.570: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.901265ms
Nov 13 05:19:59.574: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007023036s
Nov 13 05:20:01.577: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010696903s
Nov 13 05:20:03.582: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015054221s
Nov 13 05:20:05.585: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018631987s
Nov 13 05:20:07.591: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024565891s
Nov 13 05:20:09.594: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027430381s
Nov 13 05:20:11.598: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.03115057s
Nov 13 05:20:13.603: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035826249s
Nov 13 05:20:15.605: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.038409012s
STEP: Saw pod success
Nov 13 05:20:15.605: INFO: Pod "pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2" satisfied condition "Succeeded or Failed"
Nov 13 05:20:15.607: INFO: Trying to get logs from node node1 pod pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2 container test-container:
STEP: delete the pod
Nov 13 05:20:16.110: INFO: Waiting for pod pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2 to disappear
Nov 13 05:20:16.113: INFO: Pod pod-85d77564-e6d6-4c62-a1e6-d620cb411ae2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:20:16.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7008" for this suite.
• [SLOW TEST:18.588 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
new files should be created with FSGroup ownership when container is non-root
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":4,"skipped":73,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:20:16.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should create and delete default persistent volumes [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692
Nov 13 05:20:16.209: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local)
[AfterEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:20:16.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-4410" for this suite.
S [SKIPPING] [0.033 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
DynamicProvisioner Default
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:691
should create and delete default persistent volumes [Slow] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692
Only supported for providers [openstack gce aws gke vsphere azure] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:693
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:19:16.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] StatefulSet with pod affinity [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391
STEP: Setting up local volumes on node "node1"
STEP: Initializing test volumes
Nov 13 05:19:26.907: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f6280630-1047-46ec-b3d7-d0a6e88d1511] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:26.907: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:28.388: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2567c104-e2b1-4caf-aa52-c564c21d0aa3] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:28.388: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:28.553: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6a0370f5-b03d-43e4-9aa4-9d8f11a52ef0] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:28.553: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:28.864: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e9f13f59-6cec-4af5-9928-7d8d2c3d132a] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:28.865: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:29.032: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1bbaf7f6-844d-48bd-b9a6-5692f961803a] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:29.032: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:29.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4f6af843-3e2a-47a2-8851-eb72c4e235e8] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:29.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:19:29.491: INFO: Creating a PV followed by a PVC
Nov 13 05:19:29.499: INFO: Creating a PV followed by a PVC
Nov 13 05:19:29.504: INFO: Creating a PV followed by a PVC
Nov 13 05:19:29.510: INFO: Creating a PV followed by a PVC
Nov 13 05:19:29.515: INFO: Creating a PV followed by a PVC
Nov 13 05:19:29.520: INFO: Creating a PV followed by a PVC
Nov 13 05:19:39.568: INFO: PVCs were not bound within 10s (that's good)
STEP: Setting up local volumes on node "node2"
STEP: Initializing test volumes
Nov 13 05:19:45.583: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c9601b91-7cc1-4420-88d7-3e1d8e3ae3e1] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:45.583: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:45.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d9a8edc9-b69d-46ed-87b5-39eede00ebd6] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:45.984: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:46.117: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5582c579-f347-47a8-a305-314d151e1713] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:46.117: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:46.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d1a24ef5-9867-48a9-b46f-0ebd280bbd8f] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:46.278: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:46.391: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-096b485f-7bc6-4cd2-a98c-1e46916af9f6] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:46.391: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:19:46.540: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e5ad0081-7b86-4ca5-bcaa-3f0e42e76d89] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:19:46.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:19:46.626: INFO: Creating a PV followed by a PVC
Nov 13 05:19:46.633: INFO: Creating a PV followed by a PVC
Nov 13 05:19:46.669: INFO: Creating a PV followed by a PVC
Nov 13 05:19:46.675: INFO: Creating a PV followed by a PVC
Nov 13 05:19:46.680: INFO: Creating a PV followed by a PVC
Nov 13 05:19:46.687: INFO: Creating a PV followed by a PVC
Nov 13 05:19:56.733: INFO: PVCs were not bound within 10s (that's good)
[It] should use volumes on one node when pod management is parallel and pod has affinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434
STEP: Creating a StatefulSet with pod affinity on nodes
Nov 13 05:19:56.741: INFO: Found 0 stateful pods, waiting for 3
Nov 13 05:20:06.766: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Pending - Ready=false
Nov 13 05:20:16.744: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 13 05:20:16.744: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 13 05:20:16.744: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true
Nov 13 05:20:16.748: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound
Nov 13 05:20:16.750: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.332779ms)
Nov 13 05:20:16.750: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound
Nov 13 05:20:16.753: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.154453ms)
Nov 13 05:20:16.753: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Nov 13 05:20:16.755: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.057998ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 13 05:20:16.755: INFO: Deleting PersistentVolumeClaim "pvc-ff6fk" Nov 13 05:20:16.758: INFO: Deleting PersistentVolume "local-pv4d8p2" STEP: Cleaning up PVC and PV Nov 13 05:20:16.763: INFO: Deleting PersistentVolumeClaim "pvc-fg8q8" Nov 13 05:20:16.800: INFO: Deleting PersistentVolume "local-pvd5wf2" STEP: Cleaning up PVC and PV Nov 13 05:20:16.805: INFO: Deleting PersistentVolumeClaim "pvc-5q2dt" Nov 13 05:20:16.809: INFO: Deleting PersistentVolume "local-pvv2g9c" STEP: Cleaning up PVC and PV Nov 13 05:20:16.813: INFO: Deleting PersistentVolumeClaim "pvc-6qbzm" Nov 13 05:20:16.817: INFO: Deleting PersistentVolume "local-pvh6nbc" STEP: Cleaning up PVC and PV Nov 13 05:20:16.820: INFO: Deleting PersistentVolumeClaim "pvc-gn2nd" Nov 13 05:20:16.823: INFO: Deleting PersistentVolume "local-pv427lp" STEP: Cleaning up PVC and PV Nov 13 05:20:16.827: INFO: Deleting PersistentVolumeClaim "pvc-s2rtf" Nov 13 05:20:16.831: INFO: Deleting PersistentVolume "local-pvjfsp7" STEP: Removing the test directory Nov 13 05:20:16.835: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f6280630-1047-46ec-b3d7-d0a6e88d1511] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:16.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:16.943: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2567c104-e2b1-4caf-aa52-c564c21d0aa3] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:16.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:17.028: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6a0370f5-b03d-43e4-9aa4-9d8f11a52ef0] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:17.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e9f13f59-6cec-4af5-9928-7d8d2c3d132a] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:17.256: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1bbaf7f6-844d-48bd-b9a6-5692f961803a] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:17.349: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f6af843-3e2a-47a2-8851-eb72c4e235e8] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node1-mmt6z 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 13 05:20:17.459: INFO: Deleting PersistentVolumeClaim "pvc-wrqfr" Nov 13 05:20:17.464: INFO: Deleting PersistentVolume "local-pvcwk5l" STEP: Cleaning up PVC and PV Nov 13 05:20:17.468: INFO: Deleting PersistentVolumeClaim "pvc-rtgz7" Nov 13 05:20:17.471: INFO: Deleting PersistentVolume "local-pv8p4n6" STEP: Cleaning up PVC and PV Nov 13 05:20:17.475: INFO: Deleting PersistentVolumeClaim "pvc-xnpxm" Nov 13 05:20:17.479: INFO: Deleting PersistentVolume "local-pv7wbg6" STEP: Cleaning up PVC and PV Nov 13 05:20:17.483: INFO: Deleting PersistentVolumeClaim "pvc-dbtzf" Nov 13 05:20:17.486: INFO: Deleting PersistentVolume "local-pvd26sd" STEP: Cleaning up PVC and PV Nov 13 05:20:17.490: INFO: Deleting PersistentVolumeClaim "pvc-n9z84" Nov 13 05:20:17.493: INFO: Deleting PersistentVolume "local-pvwpp54" STEP: Cleaning up PVC and PV Nov 13 05:20:17.497: INFO: Deleting PersistentVolumeClaim "pvc-djbhd" Nov 13 05:20:17.500: INFO: Deleting PersistentVolume "local-pv42z5l" STEP: Removing the test directory Nov 13 05:20:17.504: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c9601b91-7cc1-4420-88d7-3e1d8e3ae3e1] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:17.687: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d9a8edc9-b69d-46ed-87b5-39eede00ebd6] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:17.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5582c579-f347-47a8-a305-314d151e1713] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:17.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:18.033: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d1a24ef5-9867-48a9-b46f-0ebd280bbd8f] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:18.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:18.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-096b485f-7bc6-4cd2-a98c-1e46916af9f6] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:18.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:18.617: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e5ad0081-7b86-4ca5-bcaa-3f0e42e76d89] Namespace:persistent-local-volumes-test-3282 PodName:hostexec-node2-vskmv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:18.617: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:18.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3282" for this suite. • [SLOW TEST:62.062 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity","total":-1,"completed":2,"skipped":62,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:18.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:20:19.002: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:19.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9615" for 
this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:58.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:19:58.730: INFO: The status of Pod test-hostpath-type-rg5ql is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:00.733: INFO: The status of Pod test-hostpath-type-rg5ql is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:02.734: INFO: The status of Pod test-hostpath-type-rg5ql is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:04.733: INFO: The status of Pod test-hostpath-type-rg5ql is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:06.765: INFO: The status of 
Pod test-hostpath-type-rg5ql is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:08.735: INFO: The status of Pod test-hostpath-type-rg5ql is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:10.733: INFO: The status of Pod test-hostpath-type-rg5ql is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 13 05:20:10.735: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1007 PodName:test-hostpath-type-rg5ql ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:10.735: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:23.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1007" for this suite. 
• [SLOW TEST:24.313 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset","total":-1,"completed":3,"skipped":50,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:16.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Nov 13 05:20:16.443: INFO: Waiting up to 5m0s for pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87" in namespace "projected-2366" to be "Succeeded or Failed" Nov 13 05:20:16.446: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535467ms Nov 13 05:20:18.450: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007446345s Nov 13 05:20:20.454: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011346496s Nov 13 05:20:22.461: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017973482s Nov 13 05:20:24.465: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022308895s Nov 13 05:20:26.472: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028968341s Nov 13 05:20:28.478: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034523879s Nov 13 05:20:30.481: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.037864173s STEP: Saw pod success Nov 13 05:20:30.481: INFO: Pod "metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87" satisfied condition "Succeeded or Failed" Nov 13 05:20:30.484: INFO: Trying to get logs from node node1 pod metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87 container client-container: STEP: delete the pod Nov 13 05:20:30.496: INFO: Waiting for pod metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87 to disappear Nov 13 05:20:30.498: INFO: Pod metadata-volume-ce3d812d-5f2d-42ec-ab79-ab53e59b5f87 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:30.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2366" for this suite. 
• [SLOW TEST:14.097 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":184,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:23.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 13 05:20:23.070: INFO: Waiting up to 5m0s for pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba" in namespace "emptydir-3548" to be "Succeeded or Failed" Nov 13 05:20:23.073: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330028ms Nov 13 05:20:25.076: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005313169s Nov 13 05:20:27.080: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009549092s Nov 13 05:20:29.084: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013332343s Nov 13 05:20:31.087: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016225854s Nov 13 05:20:33.090: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019998982s STEP: Saw pod success Nov 13 05:20:33.090: INFO: Pod "pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba" satisfied condition "Succeeded or Failed" Nov 13 05:20:33.093: INFO: Trying to get logs from node node1 pod pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba container test-container: STEP: delete the pod Nov 13 05:20:33.104: INFO: Waiting for pod pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba to disappear Nov 13 05:20:33.106: INFO: Pod pod-a0bc9fed-e0ee-4483-9f99-55b971f7cfba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:33.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3548" for this suite. 
• [SLOW TEST:10.081 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":4,"skipped":60,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:33.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:20:33.158: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:33.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4753" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:56.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d" Nov 13 05:20:12.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d" "/tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d"] 
Namespace:persistent-local-volumes-test-5702 PodName:hostexec-node1-cftlq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:12.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:20:12.607: INFO: Creating a PV followed by a PVC Nov 13 05:20:12.615: INFO: Waiting for PV local-pvrnqwh to bind to PVC pvc-zncl6 Nov 13 05:20:12.615: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zncl6] to have phase Bound Nov 13 05:20:12.617: INFO: PersistentVolumeClaim pvc-zncl6 found but phase is Pending instead of Bound. Nov 13 05:20:14.621: INFO: PersistentVolumeClaim pvc-zncl6 found and phase=Bound (2.00655629s) Nov 13 05:20:14.621: INFO: Waiting up to 3m0s for PersistentVolume local-pvrnqwh to have phase Bound Nov 13 05:20:14.624: INFO: PersistentVolume local-pvrnqwh found and phase=Bound (2.574059ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:20:32.649: INFO: pod "pod-64cf656f-eb60-401f-a0e8-3f17ffa5e306" created on Node "node1" STEP: Writing in pod1 Nov 13 05:20:32.649: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5702 PodName:pod-64cf656f-eb60-401f-a0e8-3f17ffa5e306 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:32.649: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:32.744: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 
05:20:32.744: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5702 PodName:pod-64cf656f-eb60-401f-a0e8-3f17ffa5e306 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:32.744: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:32.867: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:20:32.867: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5702 PodName:pod-64cf656f-eb60-401f-a0e8-3f17ffa5e306 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:32.867: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:33.020: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-64cf656f-eb60-401f-a0e8-3f17ffa5e306 in namespace persistent-local-volumes-test-5702 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:20:33.025: INFO: Deleting PersistentVolumeClaim "pvc-zncl6" Nov 13 05:20:33.029: INFO: Deleting PersistentVolume "local-pvrnqwh" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d" Nov 13 05:20:33.034: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d"] Namespace:persistent-local-volumes-test-5702 PodName:hostexec-node1-cftlq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:33.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:33.218: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3031f04e-8fac-47b3-b59c-ca16164a8d8d] Namespace:persistent-local-volumes-test-5702 PodName:hostexec-node1-cftlq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:33.218: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:33.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5702" for this suite. 
• [SLOW TEST:36.874 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":117,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:18:45.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1113 05:18:45.898183 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:18:45.898: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:18:45.900: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-7062 STEP: Waiting for a 
default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:18:45.964: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-attacher Nov 13 05:18:45.968: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7062 Nov 13 05:18:45.968: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7062 Nov 13 05:18:45.971: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7062 Nov 13 05:18:45.974: INFO: creating *v1.Role: csi-mock-volumes-7062-9637/external-attacher-cfg-csi-mock-volumes-7062 Nov 13 05:18:45.976: INFO: creating *v1.RoleBinding: csi-mock-volumes-7062-9637/csi-attacher-role-cfg Nov 13 05:18:45.979: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-provisioner Nov 13 05:18:45.981: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7062 Nov 13 05:18:45.981: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7062 Nov 13 05:18:45.984: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7062 Nov 13 05:18:45.987: INFO: creating *v1.Role: csi-mock-volumes-7062-9637/external-provisioner-cfg-csi-mock-volumes-7062 Nov 13 05:18:45.989: INFO: creating *v1.RoleBinding: csi-mock-volumes-7062-9637/csi-provisioner-role-cfg Nov 13 05:18:45.991: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-resizer Nov 13 05:18:45.994: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7062 Nov 13 05:18:45.994: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7062 Nov 13 05:18:45.997: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7062 Nov 13 05:18:45.999: INFO: creating *v1.Role: csi-mock-volumes-7062-9637/external-resizer-cfg-csi-mock-volumes-7062 Nov 13 05:18:46.002: INFO: creating *v1.RoleBinding: csi-mock-volumes-7062-9637/csi-resizer-role-cfg Nov 13 05:18:46.006: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-7062-9637/csi-snapshotter Nov 13 05:18:46.009: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7062 Nov 13 05:18:46.009: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7062 Nov 13 05:18:46.011: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7062 Nov 13 05:18:46.014: INFO: creating *v1.Role: csi-mock-volumes-7062-9637/external-snapshotter-leaderelection-csi-mock-volumes-7062 Nov 13 05:18:46.019: INFO: creating *v1.RoleBinding: csi-mock-volumes-7062-9637/external-snapshotter-leaderelection Nov 13 05:18:46.027: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-mock Nov 13 05:18:46.032: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7062 Nov 13 05:18:46.036: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7062 Nov 13 05:18:46.041: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7062 Nov 13 05:18:46.044: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7062 Nov 13 05:18:46.046: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7062 Nov 13 05:18:46.049: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7062 Nov 13 05:18:46.051: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7062 Nov 13 05:18:46.054: INFO: creating *v1.StatefulSet: csi-mock-volumes-7062-9637/csi-mockplugin Nov 13 05:18:46.059: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7062 Nov 13 05:18:46.062: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7062" Nov 13 05:18:46.064: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7062 to register on node node2 I1113 05:19:06.144976 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:19:06.147232 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7062","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:19:06.231102 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:19:06.287596 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:19:06.346475 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7062","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:19:07.235913 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7062"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:19:12.464: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:19:12.469: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-pg5z8] to have phase Bound Nov 13 05:19:12.472: INFO: PersistentVolumeClaim pvc-pg5z8 found but phase is Pending instead of Bound. 
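The framework's wait loop above polls the claim until it reaches phase Bound or the 5m0s timeout expires. A minimal Python sketch of that polling pattern (the `get_phase` callable stands in for an API call and is an illustrative assumption, not the framework's actual code):

```python
import time

def wait_for_phase(get_phase, want="Bound", timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns `want` or `timeout` elapses.

    Mirrors the e2e framework's "Waiting up to timeout=5m0s ... to have
    phase Bound" loop; returns elapsed seconds on success and raises
    TimeoutError otherwise. Hypothetical sketch, not framework code.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase == want:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"still {phase} after {elapsed:.1f}s")
        time.sleep(interval)

# Simulated claim that is Pending on the first poll, then Bound,
# like pvc-pg5z8 in the log (Pending, then Bound on the next poll).
phases = iter(["Pending", "Bound"])
elapsed = wait_for_phase(lambda: next(phases), interval=0.01)
```

The real framework polls the API server with a fixed 2s interval, which is why the log reports Bound after roughly 2.005s.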
I1113 05:19:12.477119 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3"}}},"Error":"","FullError":null} Nov 13 05:19:14.475: INFO: PersistentVolumeClaim pvc-pg5z8 found and phase=Bound (2.005989993s) Nov 13 05:19:14.489: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-pg5z8] to have phase Bound Nov 13 05:19:14.491: INFO: PersistentVolumeClaim pvc-pg5z8 found and phase=Bound (2.306435ms) STEP: Waiting for expected CSI calls I1113 05:19:15.957058 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:19:16.083315 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3","storage.kubernetes.io/csiProvisionerIdentity":"1636780746340-8081-csi-mock-csi-mock-volumes-7062"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:19:16.718198 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:19:16.720332 28 csi.go:431] 
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3","storage.kubernetes.io/csiProvisionerIdentity":"1636780746340-8081-csi-mock-csi-mock-volumes-7062"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:19:17.737646 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:19:17.739851 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3","storage.kubernetes.io/csiProvisionerIdentity":"1636780746340-8081-csi-mock-csi-mock-volumes-7062"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:19:19.786864 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:19:19.819: INFO: >>> kubeConfig: /root/.kube/config I1113 05:19:19.916445 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3","storage.kubernetes.io/csiProvisionerIdentity":"1636780746340-8081-csi-mock-csi-mock-volumes-7062"}},"Response":{},"Error":"","FullError":null} STEP: Waiting for pod to be running I1113 05:19:20.819191 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:19:20.833: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:19:21.036: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:19:21.158: INFO: >>> kubeConfig: /root/.kube/config I1113 05:19:21.343513 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/globalmount","target_path":"/var/lib/kubelet/pods/d70767ab-3549-4e0c-bd6c-6efb91c8b48c/volumes/kubernetes.io~csi/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3","storage.kubernetes.io/csiProvisionerIdentity":"1636780746340-8081-csi-mock-csi-mock-volumes-7062"}},"Response":{},"Error":"","FullError":null} I1113 05:19:23.398251 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:19:23.400725 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/d70767ab-3549-4e0c-bd6c-6efb91c8b48c/volumes/kubernetes.io~csi/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} STEP: Deleting the previously created pod Nov 13 05:19:36.499: INFO: Deleting pod "pvc-volume-tester-7qlnb" in namespace "csi-mock-volumes-7062" Nov 13 05:19:36.504: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7qlnb" to be fully deleted Nov 13 05:19:38.557: INFO: >>> kubeConfig: /root/.kube/config I1113 05:19:38.655017 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d70767ab-3549-4e0c-bd6c-6efb91c8b48c/volumes/kubernetes.io~csi/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/mount"},"Response":{},"Error":"","FullError":null} I1113 05:19:38.757998 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:19:38.759897 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-7qlnb Nov 13 05:19:45.509: INFO: Deleting pod "pvc-volume-tester-7qlnb" in namespace "csi-mock-volumes-7062" STEP: Deleting claim pvc-pg5z8 Nov 13 05:19:45.517: INFO: Waiting up to 2m0s for PersistentVolume pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3 to get deleted Nov 13 05:19:45.519: INFO: PersistentVolume pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3 found and phase=Bound (1.940441ms) I1113 05:19:45.529133 28 csi.go:431] 
gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 13 05:19:47.522: INFO: PersistentVolume pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3 was removed STEP: Deleting storageclass csi-mock-volumes-7062-scdb2kj STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7062 STEP: Waiting for namespaces [csi-mock-volumes-7062] to vanish STEP: uninstalling csi mock driver Nov 13 05:19:53.565: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-attacher Nov 13 05:19:53.568: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7062 Nov 13 05:19:53.572: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7062 Nov 13 05:19:53.576: INFO: deleting *v1.Role: csi-mock-volumes-7062-9637/external-attacher-cfg-csi-mock-volumes-7062 Nov 13 05:19:53.580: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7062-9637/csi-attacher-role-cfg Nov 13 05:19:53.583: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-provisioner Nov 13 05:19:53.586: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7062 Nov 13 05:19:53.590: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7062 Nov 13 05:19:53.593: INFO: deleting *v1.Role: csi-mock-volumes-7062-9637/external-provisioner-cfg-csi-mock-volumes-7062 Nov 13 05:19:53.596: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7062-9637/csi-provisioner-role-cfg Nov 13 05:19:53.600: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-resizer Nov 13 05:19:53.603: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7062 Nov 13 05:19:53.607: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7062 Nov 13 05:19:53.613: INFO: deleting *v1.Role: csi-mock-volumes-7062-9637/external-resizer-cfg-csi-mock-volumes-7062 Nov 13 05:19:53.619: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-7062-9637/csi-resizer-role-cfg Nov 13 05:19:53.626: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-snapshotter Nov 13 05:19:53.633: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7062 Nov 13 05:19:53.637: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7062 Nov 13 05:19:53.641: INFO: deleting *v1.Role: csi-mock-volumes-7062-9637/external-snapshotter-leaderelection-csi-mock-volumes-7062 Nov 13 05:19:53.645: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7062-9637/external-snapshotter-leaderelection Nov 13 05:19:53.648: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7062-9637/csi-mock Nov 13 05:19:53.651: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7062 Nov 13 05:19:53.656: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7062 Nov 13 05:19:53.659: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7062 Nov 13 05:19:53.662: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7062 Nov 13 05:19:53.666: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7062 Nov 13 05:19:53.670: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7062 Nov 13 05:19:53.676: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7062 Nov 13 05:19:53.679: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7062-9637/csi-mockplugin Nov 13 05:19:53.683: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7062 STEP: deleting the driver namespace: csi-mock-volumes-7062-9637 STEP: Waiting for namespaces [csi-mock-volumes-7062-9637] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:37.698: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready • [SLOW TEST:111.826 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error","total":-1,"completed":1,"skipped":54,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:19.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:20:19.118: INFO: The status of Pod test-hostpath-type-z6p8f is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:21.122: INFO: The status of Pod test-hostpath-type-z6p8f is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:23.123: INFO: The status of Pod test-hostpath-type-z6p8f is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:25.121: INFO: The status of Pod test-hostpath-type-z6p8f is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:27.122: INFO: The status of Pod test-hostpath-type-z6p8f is 
Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:29.122: INFO: The status of Pod test-hostpath-type-z6p8f is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:31.121: INFO: The status of Pod test-hostpath-type-z6p8f is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:20:33.122: INFO: The status of Pod test-hostpath-type-z6p8f is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Nov 13 05:20:33.125: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6280 PodName:test-hostpath-type-z6p8f ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:33.125: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:41.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-6280" for this suite. 
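Each `gRPCCall:` entry emitted by the mock CSI driver earlier in the log is a single JSON object recording the method, request, response, and any injected error. Assuming lines shaped like those entries, a small Python sketch can recover the call sequence and distinguish the failed NodeStage attempts from the eventual success (the sample lines below are abridged from the log; the parsing helper is illustrative, not part of the e2e framework):

```python
import json

def parse_grpc_calls(lines):
    """Extract (method, failed) pairs from mock-driver gRPCCall log lines."""
    calls = []
    for line in lines:
        _, _, payload = line.partition("gRPCCall: ")
        if not payload:
            continue  # not a gRPCCall entry
        entry = json.loads(payload)
        calls.append((entry["Method"], entry["FullError"] is not None))
    return calls

# Abridged lines in the same shape as the log above: one injected
# "fake error" (gRPC code 3, InvalidArgument), then a clean retry.
log = [
    'csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{},'
    '"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error",'
    '"FullError":{"code":3,"message":"fake error"}}',
    'csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{},'
    '"Response":{},"Error":"","FullError":null}',
]
calls = parse_grpc_calls(log)
# calls → [("/csi.v1.Node/NodeStageVolume", True), ("/csi.v1.Node/NodeStageVolume", False)]
```

This is the shape of check the "Waiting for expected CSI calls" steps perform: the test asserts that the recorded call sequence contains the retried NodeStageVolume attempts followed by a success.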
• [SLOW TEST:22.155 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev","total":-1,"completed":3,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:20.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-783 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:19:20.726: INFO: creating *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-attacher Nov 13 05:19:20.730: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-783 Nov 13 05:19:20.730: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-783 Nov 13 05:19:20.733: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-783 Nov 13 05:19:20.736: INFO: creating *v1.Role: csi-mock-volumes-783-6918/external-attacher-cfg-csi-mock-volumes-783 Nov 13 05:19:20.738: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-783-6918/csi-attacher-role-cfg Nov 13 05:19:20.741: INFO: creating *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-provisioner Nov 13 05:19:20.745: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-783 Nov 13 05:19:20.745: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-783 Nov 13 05:19:20.748: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-783 Nov 13 05:19:20.751: INFO: creating *v1.Role: csi-mock-volumes-783-6918/external-provisioner-cfg-csi-mock-volumes-783 Nov 13 05:19:20.754: INFO: creating *v1.RoleBinding: csi-mock-volumes-783-6918/csi-provisioner-role-cfg Nov 13 05:19:20.756: INFO: creating *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-resizer Nov 13 05:19:20.759: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-783 Nov 13 05:19:20.759: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-783 Nov 13 05:19:20.761: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-783 Nov 13 05:19:20.764: INFO: creating *v1.Role: csi-mock-volumes-783-6918/external-resizer-cfg-csi-mock-volumes-783 Nov 13 05:19:20.767: INFO: creating *v1.RoleBinding: csi-mock-volumes-783-6918/csi-resizer-role-cfg Nov 13 05:19:20.769: INFO: creating *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-snapshotter Nov 13 05:19:20.772: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-783 Nov 13 05:19:20.772: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-783 Nov 13 05:19:20.774: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-783 Nov 13 05:19:20.777: INFO: creating *v1.Role: csi-mock-volumes-783-6918/external-snapshotter-leaderelection-csi-mock-volumes-783 Nov 13 05:19:20.779: INFO: creating *v1.RoleBinding: csi-mock-volumes-783-6918/external-snapshotter-leaderelection Nov 13 05:19:20.782: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-783-6918/csi-mock Nov 13 05:19:20.784: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-783 Nov 13 05:19:20.786: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-783 Nov 13 05:19:20.790: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-783 Nov 13 05:19:20.793: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-783 Nov 13 05:19:20.799: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-783 Nov 13 05:19:20.815: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-783 Nov 13 05:19:20.819: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-783 Nov 13 05:19:20.821: INFO: creating *v1.StatefulSet: csi-mock-volumes-783-6918/csi-mockplugin Nov 13 05:19:20.826: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-783 Nov 13 05:19:20.828: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-783" Nov 13 05:19:20.831: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-783 to register on node node2 STEP: Creating pod Nov 13 05:19:47.228: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:19:47.232: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jbd7v] to have phase Bound Nov 13 05:19:47.234: INFO: PersistentVolumeClaim pvc-jbd7v found but phase is Pending instead of Bound. 
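In the NodeStage error-case test earlier in the log, kubelet reissued NodeStageVolume at increasing intervals (roughly 05:19:16.0, 16.7, 17.7, 19.9) after each injected "fake error" until the mock driver returned success. A hedged sketch of that kind of doubling backoff schedule (illustrative parameters only; kubelet's actual backoff values differ):

```python
def backoff_schedule(base=0.5, factor=2.0, cap=10.0, n=5):
    """Successive retry delays: base, base*factor, ..., capped at `cap`.

    Sketch of an exponential backoff like the one visible in the
    NodeStageVolume retry timestamps; not kubelet's real parameters.
    """
    delays = []
    d = base
    for _ in range(n):
        delays.append(min(d, cap))
        d *= factor
    return delays

# backoff_schedule() → [0.5, 1.0, 2.0, 4.0, 8.0]
```

Because the error is final (InvalidArgument), kubelet keeps the volume in the staging state machine and retries the same call rather than tearing the mount down, which is exactly what the repeated identical NodeStageVolume requests in the log show.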
Nov 13 05:19:49.238: INFO: PersistentVolumeClaim pvc-jbd7v found and phase=Bound (2.005552065s) Nov 13 05:19:49.253: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jbd7v] to have phase Bound Nov 13 05:19:49.256: INFO: PersistentVolumeClaim pvc-jbd7v found and phase=Bound (2.66827ms) STEP: Waiting for expected CSI calls STEP: Waiting for pod to be running STEP: Deleting the previously created pod Nov 13 05:20:07.407: INFO: Deleting pod "pvc-volume-tester-bqfxd" in namespace "csi-mock-volumes-783" Nov 13 05:20:07.412: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bqfxd" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-bqfxd Nov 13 05:20:28.435: INFO: Deleting pod "pvc-volume-tester-bqfxd" in namespace "csi-mock-volumes-783" STEP: Deleting claim pvc-jbd7v Nov 13 05:20:28.444: INFO: Waiting up to 2m0s for PersistentVolume pvc-de15e15c-1871-42af-ba44-9d50bb7baf68 to get deleted Nov 13 05:20:28.446: INFO: PersistentVolume pvc-de15e15c-1871-42af-ba44-9d50bb7baf68 found and phase=Bound (1.895041ms) Nov 13 05:20:30.450: INFO: PersistentVolume pvc-de15e15c-1871-42af-ba44-9d50bb7baf68 was removed STEP: Deleting storageclass csi-mock-volumes-783-sc6mqh7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-783 STEP: Waiting for namespaces [csi-mock-volumes-783] to vanish STEP: uninstalling csi mock driver Nov 13 05:20:36.466: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-attacher Nov 13 05:20:36.469: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-783 Nov 13 05:20:36.473: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-783 Nov 13 05:20:36.476: INFO: deleting *v1.Role: csi-mock-volumes-783-6918/external-attacher-cfg-csi-mock-volumes-783 Nov 13 05:20:36.480: INFO: deleting *v1.RoleBinding: csi-mock-volumes-783-6918/csi-attacher-role-cfg Nov 13 05:20:36.483: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-783-6918/csi-provisioner Nov 13 05:20:36.486: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-783 Nov 13 05:20:36.490: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-783 Nov 13 05:20:36.493: INFO: deleting *v1.Role: csi-mock-volumes-783-6918/external-provisioner-cfg-csi-mock-volumes-783 Nov 13 05:20:36.496: INFO: deleting *v1.RoleBinding: csi-mock-volumes-783-6918/csi-provisioner-role-cfg Nov 13 05:20:36.499: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-resizer Nov 13 05:20:36.503: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-783 Nov 13 05:20:36.506: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-783 Nov 13 05:20:36.509: INFO: deleting *v1.Role: csi-mock-volumes-783-6918/external-resizer-cfg-csi-mock-volumes-783 Nov 13 05:20:36.512: INFO: deleting *v1.RoleBinding: csi-mock-volumes-783-6918/csi-resizer-role-cfg Nov 13 05:20:36.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-snapshotter Nov 13 05:20:36.518: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-783 Nov 13 05:20:36.522: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-783 Nov 13 05:20:36.526: INFO: deleting *v1.Role: csi-mock-volumes-783-6918/external-snapshotter-leaderelection-csi-mock-volumes-783 Nov 13 05:20:36.529: INFO: deleting *v1.RoleBinding: csi-mock-volumes-783-6918/external-snapshotter-leaderelection Nov 13 05:20:36.533: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-783-6918/csi-mock Nov 13 05:20:36.536: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-783 Nov 13 05:20:36.540: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-783 Nov 13 05:20:36.543: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-783 Nov 13 05:20:36.546: INFO: deleting 
*v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-783 Nov 13 05:20:36.549: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-783 Nov 13 05:20:36.552: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-783 Nov 13 05:20:36.555: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-783 Nov 13 05:20:36.558: INFO: deleting *v1.StatefulSet: csi-mock-volumes-783-6918/csi-mockplugin Nov 13 05:20:36.562: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-783 STEP: deleting the driver namespace: csi-mock-volumes-783-6918 STEP: Waiting for namespaces [csi-mock-volumes-783-6918] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:42.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:81.931 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success","total":-1,"completed":2,"skipped":144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:33.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-1d945bf7-4373-4a78-8b17-23634d832ecb STEP: Creating a pod to test consume configMaps Nov 13 05:20:33.382: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886" in namespace "projected-9644" to be "Succeeded or Failed" Nov 13 05:20:33.384: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715468ms Nov 13 05:20:35.389: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006814239s Nov 13 05:20:37.393: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011753804s Nov 13 05:20:39.398: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016304665s Nov 13 05:20:41.401: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019386999s Nov 13 05:20:43.405: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.023769932s STEP: Saw pod success Nov 13 05:20:43.406: INFO: Pod "pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886" satisfied condition "Succeeded or Failed" Nov 13 05:20:43.408: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886 container agnhost-container: STEP: delete the pod Nov 13 05:20:43.420: INFO: Waiting for pod pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886 to disappear Nov 13 05:20:43.422: INFO: Pod pod-projected-configmaps-683bc604-d85a-4612-b980-194671d90886 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:43.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9644" for this suite. • [SLOW TEST:10.089 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:18:45.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W1113 05:18:45.728568 32 warnings.go:70] 
policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:18:45.728: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:18:45.736: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151" Nov 13 05:18:49.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151" "/tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:49.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9" Nov 13 05:18:49.882: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9" "/tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9"] 
Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:49.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5" Nov 13 05:18:49.989: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5" "/tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:49.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6" Nov 13 05:18:50.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6" "/tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec" Nov 13 05:18:50.194: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec" "/tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f" Nov 13 05:18:50.290: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f" "/tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53" Nov 13 05:18:50.380: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53" "/tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190" Nov 13 05:18:50.478: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh 
-c mkdir -p "/tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190" "/tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d" Nov 13 05:18:50.609: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d" "/tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f" Nov 13 05:18:50.733: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f" "/tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:50.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path 
"/tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def" Nov 13 05:18:54.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def" "/tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:54.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166" Nov 13 05:18:55.241: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166" "/tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:55.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2" Nov 13 05:18:55.370: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2" "/tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 
05:18:55.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9" Nov 13 05:18:55.562: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9" "/tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:55.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e" Nov 13 05:18:55.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e" "/tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:55.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae" Nov 13 05:18:56.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae" "/tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:56.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9" Nov 13 05:18:56.523: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9" "/tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:56.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6" Nov 13 05:18:56.633: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6" "/tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:56.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18" Nov 13 05:18:56.806: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18" 
"/tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:56.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd" Nov 13 05:18:56.986: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd" "/tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:18:56.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully Nov 13 05:19:11.636: INFO: Deleting pod pod-e2b2557c-1086-4185-b36d-b8aab9c381e3 Nov 13 05:19:11.642: INFO: Deleting PersistentVolumeClaim "pvc-g9m2c" Nov 13 05:19:11.646: INFO: Deleting PersistentVolumeClaim "pvc-mt94f" Nov 13 05:19:11.649: INFO: Deleting PersistentVolumeClaim "pvc-flttt" Nov 13 05:19:11.653: INFO: 1/28 pods finished STEP: Delete "local-pvqp6pn" and create a new PV for same local volume storage STEP: Delete "local-pvqc7sm" and create a new PV for same local volume storage STEP: Delete "local-pvx467z" and create a new PV for same local volume storage Nov 13 05:19:14.635: INFO: Deleting pod 
pod-56c0a13d-39ab-459a-9f24-1905fefa243a Nov 13 05:19:14.641: INFO: Deleting PersistentVolumeClaim "pvc-whzd8" Nov 13 05:19:14.646: INFO: Deleting PersistentVolumeClaim "pvc-xq2jd" Nov 13 05:19:14.650: INFO: Deleting PersistentVolumeClaim "pvc-wqhnj" Nov 13 05:19:14.653: INFO: 2/28 pods finished STEP: Delete "local-pvjxxww" and create a new PV for same local volume storage STEP: Delete "local-pvpnk9l" and create a new PV for same local volume storage STEP: Delete "local-pv7knvl" and create a new PV for same local volume storage Nov 13 05:19:15.636: INFO: Deleting pod pod-52487fc0-97b2-485d-878a-d8b3cba6aba0 Nov 13 05:19:15.641: INFO: Deleting PersistentVolumeClaim "pvc-dq2mr" Nov 13 05:19:15.645: INFO: Deleting PersistentVolumeClaim "pvc-pllgd" Nov 13 05:19:15.648: INFO: Deleting PersistentVolumeClaim "pvc-gqhsg" STEP: Delete "local-pvrjdpq" and create a new PV for same local volume storage Nov 13 05:19:15.653: INFO: 3/28 pods finished STEP: Delete "local-pv8bxb4" and create a new PV for same local volume storage STEP: Delete "local-pvpqg9v" and create a new PV for same local volume storage STEP: Delete "local-pv5mq6q" and create a new PV for same local volume storage Nov 13 05:19:17.635: INFO: Deleting pod pod-497eb95f-dda3-4b82-aad1-f70c5107297c Nov 13 05:19:17.641: INFO: Deleting PersistentVolumeClaim "pvc-9wqks" Nov 13 05:19:17.645: INFO: Deleting PersistentVolumeClaim "pvc-vx76d" Nov 13 05:19:17.649: INFO: Deleting PersistentVolumeClaim "pvc-jvjfx" Nov 13 05:19:17.653: INFO: 4/28 pods finished STEP: Delete "local-pvb7pxz" and create a new PV for same local volume storage STEP: Delete "local-pvckhmz" and create a new PV for same local volume storage STEP: Delete "local-pvmdfsf" and create a new PV for same local volume storage STEP: Delete "local-pv4jhtc" and create a new PV for same local volume storage Nov 13 05:19:19.636: INFO: Deleting pod pod-79dc470b-ae29-48a5-bdfa-797feefbe614 Nov 13 05:19:19.641: INFO: Deleting PersistentVolumeClaim "pvc-zjjfr" Nov 13 
05:19:19.645: INFO: Deleting PersistentVolumeClaim "pvc-jmqxv" Nov 13 05:19:19.649: INFO: Deleting PersistentVolumeClaim "pvc-nw67c" Nov 13 05:19:19.653: INFO: 5/28 pods finished STEP: Delete "local-pv5hkcp" and create a new PV for same local volume storage STEP: Delete "local-pvg58mr" and create a new PV for same local volume storage STEP: Delete "local-pvzqtls" and create a new PV for same local volume storage STEP: Delete "local-pvp9mr6" and create a new PV for same local volume storage Nov 13 05:19:22.638: INFO: Deleting pod pod-2ae72949-6bcc-4dd3-8fab-fbbc42fbfa59 Nov 13 05:19:22.645: INFO: Deleting PersistentVolumeClaim "pvc-xglc4" Nov 13 05:19:22.649: INFO: Deleting PersistentVolumeClaim "pvc-pbsdx" Nov 13 05:19:22.653: INFO: Deleting PersistentVolumeClaim "pvc-hx5dm" Nov 13 05:19:22.659: INFO: 6/28 pods finished STEP: Delete "local-pv8j72p" and create a new PV for same local volume storage STEP: Delete "local-pv2ntrg" and create a new PV for same local volume storage STEP: Delete "local-pvk2222" and create a new PV for same local volume storage STEP: Delete "local-pvvvvmm" and create a new PV for same local volume storage Nov 13 05:19:31.637: INFO: Deleting pod pod-4989a64e-bd0a-4866-a3e3-9535b81f971c Nov 13 05:19:31.644: INFO: Deleting PersistentVolumeClaim "pvc-hcxgq" Nov 13 05:19:31.647: INFO: Deleting PersistentVolumeClaim "pvc-7gpzt" Nov 13 05:19:31.651: INFO: Deleting PersistentVolumeClaim "pvc-r7xg6" Nov 13 05:19:31.655: INFO: 7/28 pods finished STEP: Delete "local-pvsm4w5" and create a new PV for same local volume storage STEP: Delete "local-pvws4dx" and create a new PV for same local volume storage STEP: Delete "local-pvz7xnq" and create a new PV for same local volume storage STEP: Delete "local-pvkwrzx" and create a new PV for same local volume storage Nov 13 05:19:34.635: INFO: Deleting pod pod-830966e2-45e3-4ab8-a81a-906ec71da105 Nov 13 05:19:34.643: INFO: Deleting PersistentVolumeClaim "pvc-t8kgg" Nov 13 05:19:34.648: INFO: Deleting 
PersistentVolumeClaim "pvc-wl7d5" Nov 13 05:19:34.651: INFO: Deleting PersistentVolumeClaim "pvc-pdrtr" Nov 13 05:19:34.655: INFO: 8/28 pods finished STEP: Delete "local-pvfcbbp" and create a new PV for same local volume storage STEP: Delete "local-pvgfllp" and create a new PV for same local volume storage STEP: Delete "local-pv95smt" and create a new PV for same local volume storage Nov 13 05:19:37.635: INFO: Deleting pod pod-29a2028d-cfaa-4c8a-a5d0-8e91155de114 Nov 13 05:19:37.641: INFO: Deleting PersistentVolumeClaim "pvc-n8fr5" Nov 13 05:19:37.645: INFO: Deleting PersistentVolumeClaim "pvc-kv6hm" Nov 13 05:19:37.649: INFO: Deleting PersistentVolumeClaim "pvc-lnrvg" Nov 13 05:19:37.653: INFO: 9/28 pods finished STEP: Delete "local-pvz58hm" and create a new PV for same local volume storage STEP: Delete "local-pvtgvcx" and create a new PV for same local volume storage STEP: Delete "local-pv2pgrg" and create a new PV for same local volume storage STEP: Delete "pvc-7879f86b-8530-4e6e-8faf-3f41852ff17a" and create a new PV for same local volume storage STEP: Delete "pvc-7879f86b-8530-4e6e-8faf-3f41852ff17a" and create a new PV for same local volume storage Nov 13 05:19:38.636: INFO: Deleting pod pod-4fca4212-62e9-45ac-90e1-8d49b39deb3b Nov 13 05:19:38.646: INFO: Deleting PersistentVolumeClaim "pvc-qvqk9" Nov 13 05:19:38.649: INFO: Deleting PersistentVolumeClaim "pvc-k9l29" Nov 13 05:19:38.654: INFO: Deleting PersistentVolumeClaim "pvc-ms4qb" Nov 13 05:19:38.658: INFO: 10/28 pods finished Nov 13 05:19:38.658: INFO: Deleting pod pod-82a0387c-315a-41ba-b3ef-8f1e7607615f Nov 13 05:19:38.665: INFO: Deleting PersistentVolumeClaim "pvc-fmr4h" STEP: Delete "local-pvdpdcr" and create a new PV for same local volume storage Nov 13 05:19:38.668: INFO: Deleting PersistentVolumeClaim "pvc-b87f8" Nov 13 05:19:38.671: INFO: Deleting PersistentVolumeClaim "pvc-mxvhj" STEP: Delete "local-pvcnkfw" and create a new PV for same local volume storage Nov 13 05:19:38.675: INFO: 11/28 pods 
finished STEP: Delete "local-pvl76bb" and create a new PV for same local volume storage STEP: Delete "local-pvdfszr" and create a new PV for same local volume storage STEP: Delete "local-pvxgjwg" and create a new PV for same local volume storage STEP: Delete "local-pvb6766" and create a new PV for same local volume storage Nov 13 05:19:39.637: INFO: Deleting pod pod-0f84bf75-5000-4944-9b64-65a680dd4ac7 Nov 13 05:19:39.644: INFO: Deleting PersistentVolumeClaim "pvc-chhl9" Nov 13 05:19:39.648: INFO: Deleting PersistentVolumeClaim "pvc-ght7q" Nov 13 05:19:39.651: INFO: Deleting PersistentVolumeClaim "pvc-v9d7t" Nov 13 05:19:39.655: INFO: 12/28 pods finished STEP: Delete "local-pvhphgh" and create a new PV for same local volume storage STEP: Delete "local-pv6pvgn" and create a new PV for same local volume storage STEP: Delete "local-pv9pkq6" and create a new PV for same local volume storage STEP: Delete "local-pvlqbfg" and create a new PV for same local volume storage STEP: Delete "pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3" and create a new PV for same local volume storage STEP: Delete "pvc-7177502f-4248-47dc-9b9b-246c6cbae3d3" and create a new PV for same local volume storage Nov 13 05:19:50.637: INFO: Deleting pod pod-65e4d61f-8c99-4b77-a442-bb4679bf38df Nov 13 05:19:50.644: INFO: Deleting PersistentVolumeClaim "pvc-s5hbf" Nov 13 05:19:50.650: INFO: Deleting PersistentVolumeClaim "pvc-pk9pl" Nov 13 05:19:50.654: INFO: Deleting PersistentVolumeClaim "pvc-88wtc" Nov 13 05:19:50.658: INFO: 13/28 pods finished STEP: Delete "local-pvxb8lf" and create a new PV for same local volume storage STEP: Delete "local-pv9sj8v" and create a new PV for same local volume storage STEP: Delete "local-pv2fjfx" and create a new PV for same local volume storage Nov 13 05:19:53.636: INFO: Deleting pod pod-12b25b77-27a1-48da-8242-fa43fa954d64 Nov 13 05:19:53.643: INFO: Deleting PersistentVolumeClaim "pvc-vplz5" Nov 13 05:19:53.647: INFO: Deleting PersistentVolumeClaim "pvc-lwmzq" Nov 13 
05:19:53.650: INFO: Deleting PersistentVolumeClaim "pvc-bpk9t" Nov 13 05:19:53.654: INFO: 14/28 pods finished STEP: Delete "local-pvvds79" and create a new PV for same local volume storage STEP: Delete "local-pv7nlnt" and create a new PV for same local volume storage STEP: Delete "local-pv9dfpz" and create a new PV for same local volume storage Nov 13 05:19:58.637: INFO: Deleting pod pod-85a0e488-b26f-4343-8d8c-69c1feec3aff Nov 13 05:19:58.643: INFO: Deleting PersistentVolumeClaim "pvc-mbxdx" Nov 13 05:19:58.647: INFO: Deleting PersistentVolumeClaim "pvc-s7jhq" Nov 13 05:19:58.651: INFO: Deleting PersistentVolumeClaim "pvc-lzkgw" Nov 13 05:19:58.654: INFO: 15/28 pods finished Nov 13 05:19:58.654: INFO: Deleting pod pod-f2b6fc66-3b97-406d-afce-e3eea40f875b STEP: Delete "local-pvrzjxd" and create a new PV for same local volume storage Nov 13 05:19:58.662: INFO: Deleting PersistentVolumeClaim "pvc-288pm" Nov 13 05:19:58.666: INFO: Deleting PersistentVolumeClaim "pvc-dfkkc" Nov 13 05:19:58.670: INFO: Deleting PersistentVolumeClaim "pvc-dk7m6" STEP: Delete "local-pvshqcw" and create a new PV for same local volume storage Nov 13 05:19:58.673: INFO: 16/28 pods finished STEP: Delete "local-pvnttsq" and create a new PV for same local volume storage STEP: Delete "local-pvqkm2v" and create a new PV for same local volume storage STEP: Delete "local-pvw4xr4" and create a new PV for same local volume storage STEP: Delete "local-pvcqsbk" and create a new PV for same local volume storage Nov 13 05:20:00.635: INFO: Deleting pod pod-178091b9-7345-4d96-9624-0c81d39ea648 Nov 13 05:20:00.643: INFO: Deleting PersistentVolumeClaim "pvc-hfbtw" Nov 13 05:20:00.648: INFO: Deleting PersistentVolumeClaim "pvc-7wkkb" Nov 13 05:20:00.651: INFO: Deleting PersistentVolumeClaim "pvc-ccmwj" Nov 13 05:20:00.655: INFO: 17/28 pods finished STEP: Delete "local-pvnvf7s" and create a new PV for same local volume storage STEP: Delete "local-pvrm6df" and create a new PV for same local volume storage STEP: 
Delete "local-pvz4g8j" and create a new PV for same local volume storage STEP: Delete "local-pv62rbc" and create a new PV for same local volume storage Nov 13 05:20:02.637: INFO: Deleting pod pod-3b27114f-18b6-4c53-bf94-1337552f74d9 Nov 13 05:20:02.643: INFO: Deleting PersistentVolumeClaim "pvc-hktlw" Nov 13 05:20:02.646: INFO: Deleting PersistentVolumeClaim "pvc-4gbpq" Nov 13 05:20:02.649: INFO: Deleting PersistentVolumeClaim "pvc-v8rpg" Nov 13 05:20:02.652: INFO: 18/28 pods finished STEP: Delete "local-pvh8cp7" and create a new PV for same local volume storage STEP: Delete "local-pvfzbc6" and create a new PV for same local volume storage STEP: Delete "local-pv7h7fm" and create a new PV for same local volume storage STEP: Delete "local-pvtzdhd" and create a new PV for same local volume storage Nov 13 05:20:09.636: INFO: Deleting pod pod-103f399e-f513-4bf0-a7d7-d85092c59f31 Nov 13 05:20:09.645: INFO: Deleting PersistentVolumeClaim "pvc-nlfcq" Nov 13 05:20:09.649: INFO: Deleting PersistentVolumeClaim "pvc-f6f25" Nov 13 05:20:09.653: INFO: Deleting PersistentVolumeClaim "pvc-4q5lf" Nov 13 05:20:09.657: INFO: 19/28 pods finished Nov 13 05:20:09.657: INFO: Deleting pod pod-889521f9-6ed9-4ae7-bce3-947082882bf3 Nov 13 05:20:09.662: INFO: Deleting PersistentVolumeClaim "pvc-hv2gh" STEP: Delete "local-pvgfdwx" and create a new PV for same local volume storage Nov 13 05:20:09.666: INFO: Deleting PersistentVolumeClaim "pvc-9jnc2" Nov 13 05:20:09.670: INFO: Deleting PersistentVolumeClaim "pvc-7n4l2" Nov 13 05:20:09.674: INFO: 20/28 pods finished STEP: Delete "local-pvnc46d" and create a new PV for same local volume storage STEP: Delete "local-pvfr8fn" and create a new PV for same local volume storage STEP: Delete "local-pvrntrf" and create a new PV for same local volume storage STEP: Delete "local-pvjhfzn" and create a new PV for same local volume storage STEP: Delete "local-pvbq49w" and create a new PV for same local volume storage Nov 13 05:20:18.637: INFO: Deleting pod 
pod-352eded0-3d6a-4b13-884f-bff472e49fad Nov 13 05:20:18.643: INFO: Deleting PersistentVolumeClaim "pvc-zlzh5" Nov 13 05:20:18.646: INFO: Deleting PersistentVolumeClaim "pvc-6sdxq" Nov 13 05:20:18.650: INFO: Deleting PersistentVolumeClaim "pvc-vfzfg" Nov 13 05:20:18.654: INFO: 21/28 pods finished STEP: Delete "local-pv99kdh" and create a new PV for same local volume storage STEP: Delete "local-pv669n9" and create a new PV for same local volume storage STEP: Delete "local-pvcsgg2" and create a new PV for same local volume storage Nov 13 05:20:19.635: INFO: Deleting pod pod-0afde51d-9c28-4fdd-8cc7-870bb746f41c Nov 13 05:20:19.643: INFO: Deleting PersistentVolumeClaim "pvc-2b7gt" Nov 13 05:20:19.647: INFO: Deleting PersistentVolumeClaim "pvc-5qhbm" Nov 13 05:20:19.650: INFO: Deleting PersistentVolumeClaim "pvc-9zklb" Nov 13 05:20:19.653: INFO: 22/28 pods finished STEP: Delete "local-pv84rvz" and create a new PV for same local volume storage STEP: Delete "local-pvxkk7r" and create a new PV for same local volume storage STEP: Delete "local-pvcbs85" and create a new PV for same local volume storage Nov 13 05:20:20.636: INFO: Deleting pod pod-4fa3ff94-06ac-49e7-a210-46b140ebefa8 Nov 13 05:20:20.641: INFO: Deleting PersistentVolumeClaim "pvc-fwkbn" Nov 13 05:20:20.645: INFO: Deleting PersistentVolumeClaim "pvc-f2rc7" Nov 13 05:20:20.648: INFO: Deleting PersistentVolumeClaim "pvc-7t4xk" Nov 13 05:20:20.652: INFO: 23/28 pods finished STEP: Delete "local-pvc626d" and create a new PV for same local volume storage STEP: Delete "local-pvtm6t8" and create a new PV for same local volume storage STEP: Delete "local-pvltjt8" and create a new PV for same local volume storage Nov 13 05:20:23.636: INFO: Deleting pod pod-b6005d17-d9cd-482e-81e3-07303e595d89 Nov 13 05:20:23.642: INFO: Deleting PersistentVolumeClaim "pvc-m4j6p" Nov 13 05:20:23.646: INFO: Deleting PersistentVolumeClaim "pvc-7smj9" Nov 13 05:20:23.649: INFO: Deleting PersistentVolumeClaim "pvc-vfcrw" Nov 13 05:20:23.653: 
INFO: 24/28 pods finished STEP: Delete "local-pvjksrq" and create a new PV for same local volume storage STEP: Delete "local-pvzb26b" and create a new PV for same local volume storage STEP: Delete "local-pvxdbn6" and create a new PV for same local volume storage STEP: Delete "local-pvv2g9c" and create a new PV for same local volume storage STEP: Delete "local-pv427lp" and create a new PV for same local volume storage STEP: Delete "local-pv4d8p2" and create a new PV for same local volume storage STEP: Delete "pvc-de15e15c-1871-42af-ba44-9d50bb7baf68" and create a new PV for same local volume storage STEP: Delete "pvc-de15e15c-1871-42af-ba44-9d50bb7baf68" and create a new PV for same local volume storage STEP: Delete "pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f" and create a new PV for same local volume storage Nov 13 05:20:28.636: INFO: Deleting pod pod-e09b230c-839d-45cd-b602-ee734c362bc6 Nov 13 05:20:28.643: INFO: Deleting PersistentVolumeClaim "pvc-tc764" Nov 13 05:20:28.647: INFO: Deleting PersistentVolumeClaim "pvc-gpj6t" Nov 13 05:20:28.651: INFO: Deleting PersistentVolumeClaim "pvc-q9nqj" Nov 13 05:20:28.655: INFO: 25/28 pods finished Nov 13 05:20:28.655: INFO: Deleting pod pod-f0c3c0b6-d02b-47b6-9a79-af7511637f0b STEP: Delete "local-pv4vbsd" and create a new PV for same local volume storage Nov 13 05:20:28.660: INFO: Deleting PersistentVolumeClaim "pvc-cxsx2" Nov 13 05:20:28.664: INFO: Deleting PersistentVolumeClaim "pvc-ms87n" Nov 13 05:20:28.668: INFO: Deleting PersistentVolumeClaim "pvc-z92fr" STEP: Delete "local-pvvp4fq" and create a new PV for same local volume storage Nov 13 05:20:28.673: INFO: 26/28 pods finished STEP: Delete "local-pvc64cd" and create a new PV for same local volume storage STEP: Delete "local-pvf7zk5" and create a new PV for same local volume storage STEP: Delete "local-pvsnw2j" and create a new PV for same local volume storage STEP: Delete "local-pvlsknx" and create a new PV for same local volume storage Nov 13 05:20:30.635: INFO: 
Deleting pod pod-1e981c53-25a9-4012-9783-71d8d7700826 Nov 13 05:20:30.640: INFO: Deleting PersistentVolumeClaim "pvc-dmqzr" Nov 13 05:20:30.644: INFO: Deleting PersistentVolumeClaim "pvc-sv5sr" Nov 13 05:20:30.648: INFO: Deleting PersistentVolumeClaim "pvc-fc422" Nov 13 05:20:30.651: INFO: 27/28 pods finished STEP: Delete "local-pv9mrzc" and create a new PV for same local volume storage STEP: Delete "local-pvgpnws" and create a new PV for same local volume storage STEP: Delete "local-pvr6hm4" and create a new PV for same local volume storage STEP: Delete "pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f" and create a new PV for same local volume storage STEP: Delete "pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f" and create a new PV for same local volume storage STEP: Delete "local-pvrnqwh" and create a new PV for same local volume storage Nov 13 05:20:39.635: INFO: Deleting pod pod-a5312c5b-c006-4fee-9c4f-e5d060ba3dc1 Nov 13 05:20:39.641: INFO: Deleting PersistentVolumeClaim "pvc-pptbd" Nov 13 05:20:39.644: INFO: Deleting PersistentVolumeClaim "pvc-dw9v2" Nov 13 05:20:39.648: INFO: Deleting PersistentVolumeClaim "pvc-n7snl" Nov 13 05:20:39.652: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Nov 13 05:20:39.652: INFO: pvc is nil Nov 13 05:20:39.652: INFO: Deleting PersistentVolume "local-pvzsrf2" STEP: Cleaning up PVC and PV Nov 13 05:20:39.655: INFO: pvc is nil Nov 13 05:20:39.655: INFO: Deleting PersistentVolume "local-pvvxr4s" STEP: Cleaning up PVC and PV Nov 13 05:20:39.659: INFO: pvc is nil Nov 13 05:20:39.659: INFO: Deleting PersistentVolume "local-pvzfcs5" STEP: Cleaning up PVC and PV Nov 13 05:20:39.662: INFO: pvc is nil Nov 13 05:20:39.662: INFO: Deleting 
PersistentVolume "local-pv5zs9j" STEP: Cleaning up PVC and PV Nov 13 05:20:39.665: INFO: pvc is nil Nov 13 05:20:39.665: INFO: Deleting PersistentVolume "local-pvl4f4w" STEP: Cleaning up PVC and PV Nov 13 05:20:39.669: INFO: pvc is nil Nov 13 05:20:39.669: INFO: Deleting PersistentVolume "local-pvlrsth" STEP: Cleaning up PVC and PV Nov 13 05:20:39.673: INFO: pvc is nil Nov 13 05:20:39.673: INFO: Deleting PersistentVolume "local-pv2xg7d" STEP: Cleaning up PVC and PV Nov 13 05:20:39.676: INFO: pvc is nil Nov 13 05:20:39.676: INFO: Deleting PersistentVolume "local-pvnm5kq" STEP: Cleaning up PVC and PV Nov 13 05:20:39.679: INFO: pvc is nil Nov 13 05:20:39.679: INFO: Deleting PersistentVolume "local-pvfwxbh" STEP: Cleaning up PVC and PV Nov 13 05:20:39.683: INFO: pvc is nil Nov 13 05:20:39.683: INFO: Deleting PersistentVolume "local-pvtb4lc" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151" Nov 13 05:20:39.687: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:39.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:40.061: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8beb8b12-46fc-458a-aa43-26c1bb2f4151] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9" Nov 13 05:20:40.147: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:40.254: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b3f76c08-e56f-41ef-ab48-9898a0b204a9] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5" Nov 13 05:20:40.345: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:40.446: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0ae551bf-b132-4c30-a8e4-2b149236f2d5] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6" Nov 13 05:20:40.534: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:40.640: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-db0ea0a6-18f2-4b0b-9ea6-9318359ce4a6] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec" Nov 13 05:20:40.742: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:40.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-57a4b913-171c-43e4-b931-2b977fb9dfec] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f" Nov 13 05:20:40.997: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:40.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:41.141: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2e67c893-c521-4c6b-8f46-57751f63958f] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:41.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53" Nov 13 05:20:41.282: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:41.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:41.530: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7dd23526-4604-497e-823a-6efe4da5df53] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:41.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190" Nov 13 05:20:41.721: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:41.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:41.891: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7e2219aa-7958-4685-90e7-ab4cc4688190] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:41.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d" Nov 13 05:20:42.079: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:42.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:42.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1a36b7f6-aa10-4dd5-aca8-4bdd0d24df7d] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:42.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f" Nov 13 05:20:42.375: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:42.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:42.572: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-71fd0c90-fcea-4f2a-a486-537b64cba06f] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node1-nstb2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:42.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Nov 13 05:20:42.689: INFO: pvc is nil Nov 13 05:20:42.689: INFO: Deleting PersistentVolume "local-pvftw9c" STEP: Cleaning up PVC and PV Nov 13 05:20:42.695: INFO: pvc is nil Nov 13 05:20:42.695: INFO: Deleting PersistentVolume "local-pv8r4bc" STEP: Cleaning up PVC and PV Nov 13 05:20:42.698: INFO: pvc is nil Nov 13 05:20:42.698: INFO: Deleting PersistentVolume "local-pv2lvfw" STEP: Cleaning up PVC and PV Nov 13 05:20:42.702: INFO: pvc is nil Nov 13 05:20:42.702: INFO: Deleting PersistentVolume "local-pvn2tdw" STEP: Cleaning up PVC and PV Nov 13 05:20:42.709: INFO: pvc is nil Nov 13 05:20:42.709: INFO: Deleting PersistentVolume "local-pv57twr" STEP: Cleaning up PVC and PV Nov 13 05:20:42.713: INFO: pvc is nil Nov 13 05:20:42.713: INFO: Deleting PersistentVolume "local-pvg9mhb" STEP: Cleaning up PVC and PV Nov 13 05:20:42.719: INFO: pvc is nil Nov 13 05:20:42.719: INFO: Deleting PersistentVolume "local-pv4fs4n" STEP: Cleaning up PVC and PV Nov 13 05:20:42.726: INFO: pvc is nil Nov 13 05:20:42.726: INFO: Deleting PersistentVolume "local-pv4dr9n" STEP: Cleaning up PVC and PV Nov 13 05:20:42.729: 
INFO: pvc is nil Nov 13 05:20:42.729: INFO: Deleting PersistentVolume "local-pvxxnbn" STEP: Cleaning up PVC and PV Nov 13 05:20:42.733: INFO: pvc is nil Nov 13 05:20:42.733: INFO: Deleting PersistentVolume "local-pvbw8gk" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def" Nov 13 05:20:42.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:42.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:43.057: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5c6a4403-4724-4137-bf0e-c77d88946def] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:43.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166" Nov 13 05:20:43.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:43.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:43.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3330645d-34ea-4fd7-b830-33c9447e4166] Namespace:persistent-local-volumes-test-5764 
PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:43.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2" Nov 13 05:20:43.816: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:43.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:43.921: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ab5d4e7f-7273-4044-8d1f-57d1cfd8dad2] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:43.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9" Nov 13 05:20:44.014: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:44.159: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-66d6ae67-14ab-435e-ad46-d30efbb086b9] Namespace:persistent-local-volumes-test-5764 
PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e" Nov 13 05:20:44.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:44.467: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2d07b934-5432-4562-a7e4-23413a0d4b4e] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae" Nov 13 05:20:44.568: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:44.667: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-037000a2-62e1-411d-8219-8d216ef7ceae] Namespace:persistent-local-volumes-test-5764 
PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9" Nov 13 05:20:44.747: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:44.853: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cc7fb9a0-4a6d-4b62-9e28-8bb0ccfc79f9] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:44.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6" Nov 13 05:20:45.007: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:45.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:45.200: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-abbf8fe9-f7c1-4e20-a79a-c505f86beee6] Namespace:persistent-local-volumes-test-5764 
PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:45.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18" Nov 13 05:20:45.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:45.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:45.424: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-eb42a306-8f8d-44e9-92cb-b39fc955ad18] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:45.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd" Nov 13 05:20:45.528: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd"] Namespace:persistent-local-volumes-test-5764 PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:45.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:20:45.623: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0bea223b-d3df-4adb-b1ae-5a276bac09bd] Namespace:persistent-local-volumes-test-5764 
PodName:hostexec-node2-8cqf4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:45.623: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5764" for this suite. • [SLOW TEST:120.017 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":-1,"completed":1,"skipped":2,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:45.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:112 [It] should be reschedulable [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Nov 13 05:20:45.927: INFO: Only supported for providers [openstack gce gke vsphere azure] (not local) [AfterEach] pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:322 [AfterEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:45.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8628" for this suite. S [SKIPPING] [0.042 seconds] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Default StorageClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:319 pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:320 should be reschedulable [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Only supported for providers [openstack gce gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:328 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:46.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:20:50.099: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9456 PodName:hostexec-node1-kwjcz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:50.099: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:50.194: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:20:50.194: INFO: exec node1: stdout: "0\n" Nov 13 05:20:50.194: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:20:50.194: INFO: exec node1: exit code: 0 Nov 13 05:20:50.194: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:20:50.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9456" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [4.151 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:43.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:20:53.593: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e4078bc5-c119-4b68-b50f-8066c4b6cddb-backend && ln -s 
/tmp/local-volume-test-e4078bc5-c119-4b68-b50f-8066c4b6cddb-backend /tmp/local-volume-test-e4078bc5-c119-4b68-b50f-8066c4b6cddb] Namespace:persistent-local-volumes-test-5124 PodName:hostexec-node2-mbcfs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:53.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:20:53.696: INFO: Creating a PV followed by a PVC Nov 13 05:20:53.704: INFO: Waiting for PV local-pvlqwh8 to bind to PVC pvc-4d6mp Nov 13 05:20:53.704: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4d6mp] to have phase Bound Nov 13 05:20:53.706: INFO: PersistentVolumeClaim pvc-4d6mp found but phase is Pending instead of Bound. Nov 13 05:20:55.709: INFO: PersistentVolumeClaim pvc-4d6mp found and phase=Bound (2.005047101s) Nov 13 05:20:55.709: INFO: Waiting up to 3m0s for PersistentVolume local-pvlqwh8 to have phase Bound Nov 13 05:20:55.711: INFO: PersistentVolume local-pvlqwh8 found and phase=Bound (2.037708ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:20:59.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5124 exec pod-d6ba3f56-1be8-43cd-a0a3-98b6723e77f3 --namespace=persistent-local-volumes-test-5124 -- stat -c %g /mnt/volume1' Nov 13 05:21:00.194: INFO: stderr: "" Nov 13 05:21:00.194: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-d6ba3f56-1be8-43cd-a0a3-98b6723e77f3 in namespace persistent-local-volumes-test-5124 [AfterEach] [Volume type: dir-link] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:21:00.199: INFO: Deleting PersistentVolumeClaim "pvc-4d6mp" Nov 13 05:21:00.203: INFO: Deleting PersistentVolume "local-pvlqwh8" STEP: Removing the test directory Nov 13 05:21:00.207: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e4078bc5-c119-4b68-b50f-8066c4b6cddb && rm -r /tmp/local-volume-test-e4078bc5-c119-4b68-b50f-8066c4b6cddb-backend] Namespace:persistent-local-volumes-test-5124 PodName:hostexec-node2-mbcfs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:21:00.207: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:00.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5124" for this suite. 
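The fsGroup test above verifies the volume's group ownership by exec'ing `stat -c %g /mnt/volume1` in the pod and expecting `1234`. A minimal standalone sketch of that check, run locally instead of via `kubectl exec` (the scratch file is our addition; in the test, kubelet chowns the volume to the pod's fsGroup before the container starts):

```shell
#!/bin/sh
# Sketch, assuming GNU coreutils stat (the `-c %g` form used in the log).
# A freshly created file is group-owned by the creating user's primary
# group, so `stat -c %g` on it must match `id -g` -- the same invocation
# the test runs inside the pod against /mnt/volume1.
set -e
f=$(mktemp)
gid=$(stat -c %g "$f")   # group owner, numeric -- what the test greps for
[ "$gid" = "$(id -g)" ]  # in the e2e test this would be the fsGroup, 1234
rm -f "$f"
```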
• [SLOW TEST:16.983 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":5,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:24.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-4889 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:19:24.914: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-attacher Nov 13 05:19:24.917: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4889 Nov 13 05:19:24.917: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4889 Nov 13 
05:19:24.920: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4889 Nov 13 05:19:24.923: INFO: creating *v1.Role: csi-mock-volumes-4889-4699/external-attacher-cfg-csi-mock-volumes-4889 Nov 13 05:19:24.925: INFO: creating *v1.RoleBinding: csi-mock-volumes-4889-4699/csi-attacher-role-cfg Nov 13 05:19:24.928: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-provisioner Nov 13 05:19:24.931: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4889 Nov 13 05:19:24.931: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4889 Nov 13 05:19:24.934: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4889 Nov 13 05:19:24.937: INFO: creating *v1.Role: csi-mock-volumes-4889-4699/external-provisioner-cfg-csi-mock-volumes-4889 Nov 13 05:19:24.940: INFO: creating *v1.RoleBinding: csi-mock-volumes-4889-4699/csi-provisioner-role-cfg Nov 13 05:19:24.943: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-resizer Nov 13 05:19:24.945: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4889 Nov 13 05:19:24.945: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4889 Nov 13 05:19:24.948: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4889 Nov 13 05:19:24.951: INFO: creating *v1.Role: csi-mock-volumes-4889-4699/external-resizer-cfg-csi-mock-volumes-4889 Nov 13 05:19:24.954: INFO: creating *v1.RoleBinding: csi-mock-volumes-4889-4699/csi-resizer-role-cfg Nov 13 05:19:24.957: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-snapshotter Nov 13 05:19:24.960: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4889 Nov 13 05:19:24.960: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4889 Nov 13 05:19:24.962: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4889 Nov 13 05:19:24.965: INFO: creating *v1.Role: 
csi-mock-volumes-4889-4699/external-snapshotter-leaderelection-csi-mock-volumes-4889 Nov 13 05:19:24.967: INFO: creating *v1.RoleBinding: csi-mock-volumes-4889-4699/external-snapshotter-leaderelection Nov 13 05:19:24.970: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-mock Nov 13 05:19:24.973: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4889 Nov 13 05:19:24.975: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4889 Nov 13 05:19:24.978: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4889 Nov 13 05:19:24.981: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4889 Nov 13 05:19:24.984: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4889 Nov 13 05:19:24.986: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4889 Nov 13 05:19:24.989: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4889 Nov 13 05:19:24.992: INFO: creating *v1.StatefulSet: csi-mock-volumes-4889-4699/csi-mockplugin Nov 13 05:19:24.996: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4889 Nov 13 05:19:24.999: INFO: creating *v1.StatefulSet: csi-mock-volumes-4889-4699/csi-mockplugin-attacher Nov 13 05:19:25.003: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4889" Nov 13 05:19:25.005: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4889 to register on node node2 STEP: Creating pod Nov 13 05:19:56.414: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:20:14.441: INFO: Deleting pod "pvc-volume-tester-7qcn5" in namespace "csi-mock-volumes-4889" Nov 13 05:20:14.448: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7qcn5" to be fully deleted STEP: Deleting pod pvc-volume-tester-7qcn5 Nov 13 05:20:28.456: 
INFO: Deleting pod "pvc-volume-tester-7qcn5" in namespace "csi-mock-volumes-4889" STEP: Deleting claim pvc-6jcv9 Nov 13 05:20:28.464: INFO: Waiting up to 2m0s for PersistentVolume pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f to get deleted Nov 13 05:20:28.466: INFO: PersistentVolume pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f found and phase=Bound (1.862177ms) Nov 13 05:20:30.469: INFO: PersistentVolume pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f found and phase=Released (2.005310921s) Nov 13 05:20:32.475: INFO: PersistentVolume pvc-26ec3119-5f2d-4fa1-858d-19f6471fc95f was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4889 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4889 STEP: Waiting for namespaces [csi-mock-volumes-4889] to vanish STEP: uninstalling csi mock driver Nov 13 05:20:38.489: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-attacher Nov 13 05:20:38.493: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4889 Nov 13 05:20:38.497: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4889 Nov 13 05:20:38.501: INFO: deleting *v1.Role: csi-mock-volumes-4889-4699/external-attacher-cfg-csi-mock-volumes-4889 Nov 13 05:20:38.504: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4889-4699/csi-attacher-role-cfg Nov 13 05:20:38.516: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-provisioner Nov 13 05:20:38.527: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4889 Nov 13 05:20:38.535: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4889 Nov 13 05:20:38.538: INFO: deleting *v1.Role: csi-mock-volumes-4889-4699/external-provisioner-cfg-csi-mock-volumes-4889 Nov 13 05:20:38.542: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4889-4699/csi-provisioner-role-cfg Nov 13 05:20:38.545: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-resizer Nov 13 05:20:38.548: 
INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4889 Nov 13 05:20:38.551: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4889 Nov 13 05:20:38.554: INFO: deleting *v1.Role: csi-mock-volumes-4889-4699/external-resizer-cfg-csi-mock-volumes-4889 Nov 13 05:20:38.558: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4889-4699/csi-resizer-role-cfg Nov 13 05:20:38.561: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-snapshotter Nov 13 05:20:38.564: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4889 Nov 13 05:20:38.567: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4889 Nov 13 05:20:38.570: INFO: deleting *v1.Role: csi-mock-volumes-4889-4699/external-snapshotter-leaderelection-csi-mock-volumes-4889 Nov 13 05:20:38.575: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4889-4699/external-snapshotter-leaderelection Nov 13 05:20:38.579: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4889-4699/csi-mock Nov 13 05:20:38.582: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4889 Nov 13 05:20:38.586: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4889 Nov 13 05:20:38.589: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4889 Nov 13 05:20:38.592: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4889 Nov 13 05:20:38.595: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4889 Nov 13 05:20:38.599: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4889 Nov 13 05:20:38.602: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4889 Nov 13 05:20:38.606: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4889-4699/csi-mockplugin Nov 13 05:20:38.610: INFO: deleting *v1.CSIDriver: 
csi-mock-csi-mock-volumes-4889 Nov 13 05:20:38.614: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4889-4699/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4889-4699 STEP: Waiting for namespaces [csi-mock-volumes-4889-4699] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:00.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:95.793 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":2,"skipped":4,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:00.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:21:00.745: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:00.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3598" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:44.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-6487 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:19:44.215: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-attacher Nov 13 05:19:44.217: INFO: creating *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-6487 Nov 13 05:19:44.217: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6487 Nov 13 05:19:44.221: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6487 Nov 13 05:19:44.224: INFO: creating *v1.Role: csi-mock-volumes-6487-3224/external-attacher-cfg-csi-mock-volumes-6487 Nov 13 05:19:44.227: INFO: creating *v1.RoleBinding: csi-mock-volumes-6487-3224/csi-attacher-role-cfg Nov 13 05:19:44.230: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-provisioner Nov 13 05:19:44.232: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6487 Nov 13 05:19:44.232: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6487 Nov 13 05:19:44.235: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6487 Nov 13 05:19:44.238: INFO: creating *v1.Role: csi-mock-volumes-6487-3224/external-provisioner-cfg-csi-mock-volumes-6487 Nov 13 05:19:44.241: INFO: creating *v1.RoleBinding: csi-mock-volumes-6487-3224/csi-provisioner-role-cfg Nov 13 05:19:44.243: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-resizer Nov 13 05:19:44.246: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6487 Nov 13 05:19:44.246: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6487 Nov 13 05:19:44.249: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6487 Nov 13 05:19:44.251: INFO: creating *v1.Role: csi-mock-volumes-6487-3224/external-resizer-cfg-csi-mock-volumes-6487 Nov 13 05:19:44.254: INFO: creating *v1.RoleBinding: csi-mock-volumes-6487-3224/csi-resizer-role-cfg Nov 13 05:19:44.256: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-snapshotter Nov 13 05:19:44.259: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6487 Nov 13 05:19:44.259: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6487 Nov 13 
05:19:44.261: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6487 Nov 13 05:19:44.264: INFO: creating *v1.Role: csi-mock-volumes-6487-3224/external-snapshotter-leaderelection-csi-mock-volumes-6487 Nov 13 05:19:44.266: INFO: creating *v1.RoleBinding: csi-mock-volumes-6487-3224/external-snapshotter-leaderelection Nov 13 05:19:44.269: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-mock Nov 13 05:19:44.271: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6487 Nov 13 05:19:44.274: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6487 Nov 13 05:19:44.276: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6487 Nov 13 05:19:44.279: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6487 Nov 13 05:19:44.282: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6487 Nov 13 05:19:44.285: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6487 Nov 13 05:19:44.287: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6487 Nov 13 05:19:44.290: INFO: creating *v1.StatefulSet: csi-mock-volumes-6487-3224/csi-mockplugin Nov 13 05:19:44.294: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6487 Nov 13 05:19:44.296: INFO: creating *v1.StatefulSet: csi-mock-volumes-6487-3224/csi-mockplugin-resizer Nov 13 05:19:44.299: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6487" Nov 13 05:19:44.302: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6487 to register on node node2 STEP: Creating pod Nov 13 05:20:10.705: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:20:10.710: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-bzcqk] to have phase Bound Nov 13 05:20:10.712: INFO: PersistentVolumeClaim 
pvc-bzcqk found but phase is Pending instead of Bound. Nov 13 05:20:12.715: INFO: PersistentVolumeClaim pvc-bzcqk found and phase=Bound (2.005338879s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Nov 13 05:20:26.754: INFO: Deleting pod "pvc-volume-tester-8vx89" in namespace "csi-mock-volumes-6487" Nov 13 05:20:26.760: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8vx89" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-8vx89 Nov 13 05:20:46.785: INFO: Deleting pod "pvc-volume-tester-8vx89" in namespace "csi-mock-volumes-6487" STEP: Deleting pod pvc-volume-tester-p8vbg Nov 13 05:20:46.788: INFO: Deleting pod "pvc-volume-tester-p8vbg" in namespace "csi-mock-volumes-6487" Nov 13 05:20:46.793: INFO: Wait up to 5m0s for pod "pvc-volume-tester-p8vbg" to be fully deleted STEP: Deleting claim pvc-bzcqk Nov 13 05:20:52.811: INFO: Waiting up to 2m0s for PersistentVolume pvc-0274495d-294d-4501-afd6-c2ff1c71cc72 to get deleted Nov 13 05:20:52.814: INFO: PersistentVolume pvc-0274495d-294d-4501-afd6-c2ff1c71cc72 found and phase=Bound (3.022091ms) Nov 13 05:20:54.817: INFO: PersistentVolume pvc-0274495d-294d-4501-afd6-c2ff1c71cc72 was removed STEP: Deleting storageclass csi-mock-volumes-6487-sclg45x STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6487 STEP: Waiting for namespaces [csi-mock-volumes-6487] to vanish STEP: uninstalling csi mock driver Nov 13 05:21:00.831: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-attacher Nov 13 05:21:00.836: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6487 Nov 13 05:21:00.839: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6487 Nov 13 05:21:00.843: INFO: deleting *v1.Role: 
csi-mock-volumes-6487-3224/external-attacher-cfg-csi-mock-volumes-6487 Nov 13 05:21:00.846: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6487-3224/csi-attacher-role-cfg Nov 13 05:21:00.850: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-provisioner Nov 13 05:21:00.854: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6487 Nov 13 05:21:00.857: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6487 Nov 13 05:21:00.860: INFO: deleting *v1.Role: csi-mock-volumes-6487-3224/external-provisioner-cfg-csi-mock-volumes-6487 Nov 13 05:21:00.864: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6487-3224/csi-provisioner-role-cfg Nov 13 05:21:00.867: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-resizer Nov 13 05:21:00.870: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6487 Nov 13 05:21:00.874: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6487 Nov 13 05:21:00.877: INFO: deleting *v1.Role: csi-mock-volumes-6487-3224/external-resizer-cfg-csi-mock-volumes-6487 Nov 13 05:21:00.880: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6487-3224/csi-resizer-role-cfg Nov 13 05:21:00.883: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-snapshotter Nov 13 05:21:00.888: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6487 Nov 13 05:21:00.891: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6487 Nov 13 05:21:00.894: INFO: deleting *v1.Role: csi-mock-volumes-6487-3224/external-snapshotter-leaderelection-csi-mock-volumes-6487 Nov 13 05:21:00.898: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6487-3224/external-snapshotter-leaderelection Nov 13 05:21:00.902: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6487-3224/csi-mock Nov 13 05:21:00.905: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6487 Nov 13 05:21:00.908: INFO: 
deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6487 Nov 13 05:21:00.911: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6487 Nov 13 05:21:00.914: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6487 Nov 13 05:21:00.917: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6487 Nov 13 05:21:00.920: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6487 Nov 13 05:21:00.922: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6487 Nov 13 05:21:00.926: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6487-3224/csi-mockplugin Nov 13 05:21:00.929: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6487 Nov 13 05:21:00.932: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6487-3224/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-6487-3224 STEP: Waiting for namespaces [csi-mock-volumes-6487-3224] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:06.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:82.841 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, 
nodeExpansion=on","total":-1,"completed":3,"skipped":200,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:42.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:20:54.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-db486129-88f8-4241-b4f2-b7e01dbbacb3-backend && ln -s /tmp/local-volume-test-db486129-88f8-4241-b4f2-b7e01dbbacb3-backend /tmp/local-volume-test-db486129-88f8-4241-b4f2-b7e01dbbacb3] Namespace:persistent-local-volumes-test-4938 PodName:hostexec-node2-5kwxh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:54.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:20:54.796: INFO: Creating a PV followed by a PVC Nov 13 05:20:54.803: INFO: Waiting for PV local-pv7knsq to bind to PVC pvc-bshz4 Nov 13 05:20:54.803: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-bshz4] to have phase Bound Nov 13 05:20:54.806: INFO: PersistentVolumeClaim pvc-bshz4 found but phase is Pending instead of Bound. 
Nov 13 05:20:56.813: INFO: PersistentVolumeClaim pvc-bshz4 found and phase=Bound (2.009556119s) Nov 13 05:20:56.813: INFO: Waiting up to 3m0s for PersistentVolume local-pv7knsq to have phase Bound Nov 13 05:20:56.816: INFO: PersistentVolume local-pv7knsq found and phase=Bound (2.731377ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:21:02.842: INFO: pod "pod-7ae7fab7-b5ac-456b-8fbe-3ae5bb278120" created on Node "node2" STEP: Writing in pod1 Nov 13 05:21:02.842: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4938 PodName:pod-7ae7fab7-b5ac-456b-8fbe-3ae5bb278120 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:02.842: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:03.015: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:21:03.015: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4938 PodName:pod-7ae7fab7-b5ac-456b-8fbe-3ae5bb278120 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:03.015: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:03.173: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-7ae7fab7-b5ac-456b-8fbe-3ae5bb278120 in namespace persistent-local-volumes-test-4938 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:21:13.202: INFO: pod "pod-be1a50ee-fa15-4793-88d7-c1a1cc0fba6f" created on Node "node2" STEP: Reading in pod2 Nov 13 05:21:13.203: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4938 PodName:pod-be1a50ee-fa15-4793-88d7-c1a1cc0fba6f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:13.203: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:13.372: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-be1a50ee-fa15-4793-88d7-c1a1cc0fba6f in namespace persistent-local-volumes-test-4938 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:21:13.377: INFO: Deleting PersistentVolumeClaim "pvc-bshz4" Nov 13 05:21:13.381: INFO: Deleting PersistentVolume "local-pv7knsq" STEP: Removing the test directory Nov 13 05:21:13.385: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-db486129-88f8-4241-b4f2-b7e01dbbacb3 && rm -r /tmp/local-volume-test-db486129-88f8-4241-b4f2-b7e01dbbacb3-backend] Namespace:persistent-local-volumes-test-4938 PodName:hostexec-node2-5kwxh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:21:13.385: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:13.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4938" for this suite. 
• [SLOW TEST:30.974 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:13.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:21:13.720: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:13.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-1303" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:81 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:33.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda" Nov 13 05:20:41.256: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda" "/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda"] 
Namespace:persistent-local-volumes-test-1524 PodName:hostexec-node1-wgqb9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:20:41.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:20:41.516: INFO: Creating a PV followed by a PVC Nov 13 05:20:41.522: INFO: Waiting for PV local-pv62fc9 to bind to PVC pvc-9ddxb Nov 13 05:20:41.522: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9ddxb] to have phase Bound Nov 13 05:20:41.524: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:43.529: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:45.532: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:47.540: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:49.543: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:51.556: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:53.561: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. Nov 13 05:20:55.565: INFO: PersistentVolumeClaim pvc-9ddxb found but phase is Pending instead of Bound. 
Nov 13 05:20:57.570: INFO: PersistentVolumeClaim pvc-9ddxb found and phase=Bound (16.047918048s) Nov 13 05:20:57.570: INFO: Waiting up to 3m0s for PersistentVolume local-pv62fc9 to have phase Bound Nov 13 05:20:57.572: INFO: PersistentVolume local-pv62fc9 found and phase=Bound (2.213277ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:21:07.602: INFO: pod "pod-9f9af5cc-071c-4671-ae03-b0f5a24e0740" created on Node "node1" STEP: Writing in pod1 Nov 13 05:21:07.602: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1524 PodName:pod-9f9af5cc-071c-4671-ae03-b0f5a24e0740 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:07.602: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:07.760: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:21:07.760: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1524 PodName:pod-9f9af5cc-071c-4671-ae03-b0f5a24e0740 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:07.760: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:07.966: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:21:15.992: INFO: pod "pod-18b7f63a-aba6-4a0f-9997-90c4228ce1be" created on Node "node1" Nov 13 05:21:15.992: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1524 
PodName:pod-18b7f63a-aba6-4a0f-9997-90c4228ce1be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:15.992: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:16.080: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:21:16.080: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1524 PodName:pod-18b7f63a-aba6-4a0f-9997-90c4228ce1be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:16.080: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:16.162: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:21:16.162: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1524 PodName:pod-9f9af5cc-071c-4671-ae03-b0f5a24e0740 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:16.162: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:21:16.242: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-9f9af5cc-071c-4671-ae03-b0f5a24e0740 in namespace persistent-local-volumes-test-1524 STEP: Deleting pod2 STEP: Deleting pod pod-18b7f63a-aba6-4a0f-9997-90c4228ce1be in namespace persistent-local-volumes-test-1524 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:21:16.251: INFO: 
Deleting PersistentVolumeClaim "pvc-9ddxb" Nov 13 05:21:16.254: INFO: Deleting PersistentVolume "local-pv62fc9" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda" Nov 13 05:21:16.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda"] Namespace:persistent-local-volumes-test-1524 PodName:hostexec-node1-wgqb9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:21:16.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:21:16.359: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-edde062a-8576-411a-a81a-48caf3ec0bda] Namespace:persistent-local-volumes-test-1524 PodName:hostexec-node1-wgqb9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:21:16.359: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:16.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1524" for this suite. 
• [SLOW TEST:43.251 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":88,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:50.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-8568 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:20:50.369: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-attacher Nov 13 05:20:50.371: INFO: creating *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-8568 Nov 13 05:20:50.371: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8568 Nov 13 05:20:50.373: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8568 Nov 13 05:20:50.376: INFO: creating *v1.Role: csi-mock-volumes-8568-2256/external-attacher-cfg-csi-mock-volumes-8568 Nov 13 05:20:50.378: INFO: creating *v1.RoleBinding: csi-mock-volumes-8568-2256/csi-attacher-role-cfg Nov 13 05:20:50.380: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-provisioner Nov 13 05:20:50.382: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8568 Nov 13 05:20:50.382: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8568 Nov 13 05:20:50.384: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8568 Nov 13 05:20:50.387: INFO: creating *v1.Role: csi-mock-volumes-8568-2256/external-provisioner-cfg-csi-mock-volumes-8568 Nov 13 05:20:50.389: INFO: creating *v1.RoleBinding: csi-mock-volumes-8568-2256/csi-provisioner-role-cfg Nov 13 05:20:50.392: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-resizer Nov 13 05:20:50.394: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8568 Nov 13 05:20:50.394: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8568 Nov 13 05:20:50.396: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8568 Nov 13 05:20:50.398: INFO: creating *v1.Role: csi-mock-volumes-8568-2256/external-resizer-cfg-csi-mock-volumes-8568 Nov 13 05:20:50.401: INFO: creating *v1.RoleBinding: csi-mock-volumes-8568-2256/csi-resizer-role-cfg Nov 13 05:20:50.403: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-snapshotter Nov 13 05:20:50.405: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8568 Nov 13 05:20:50.405: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8568 Nov 13 
05:20:50.407: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8568 Nov 13 05:20:50.409: INFO: creating *v1.Role: csi-mock-volumes-8568-2256/external-snapshotter-leaderelection-csi-mock-volumes-8568 Nov 13 05:20:50.412: INFO: creating *v1.RoleBinding: csi-mock-volumes-8568-2256/external-snapshotter-leaderelection Nov 13 05:20:50.414: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-mock Nov 13 05:20:50.416: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8568 Nov 13 05:20:50.419: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8568 Nov 13 05:20:50.421: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8568 Nov 13 05:20:50.423: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8568 Nov 13 05:20:50.426: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8568 Nov 13 05:20:50.428: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8568 Nov 13 05:20:50.430: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8568 Nov 13 05:20:50.435: INFO: creating *v1.StatefulSet: csi-mock-volumes-8568-2256/csi-mockplugin Nov 13 05:20:50.439: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8568 Nov 13 05:20:50.442: INFO: creating *v1.StatefulSet: csi-mock-volumes-8568-2256/csi-mockplugin-attacher Nov 13 05:20:50.445: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8568" Nov 13 05:20:50.447: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8568 to register on node node2 STEP: Creating pod Nov 13 05:21:00.465: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:21:00.483: INFO: Deleting pod "pvc-volume-tester-b84tp" in namespace "csi-mock-volumes-8568" Nov 13 05:21:00.490: 
INFO: Wait up to 5m0s for pod "pvc-volume-tester-b84tp" to be fully deleted STEP: Deleting pod pvc-volume-tester-b84tp Nov 13 05:21:00.492: INFO: Deleting pod "pvc-volume-tester-b84tp" in namespace "csi-mock-volumes-8568" STEP: Deleting claim pvc-q6pdc STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-8568 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8568 STEP: Waiting for namespaces [csi-mock-volumes-8568] to vanish STEP: uninstalling csi mock driver Nov 13 05:21:06.511: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-attacher Nov 13 05:21:06.516: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8568 Nov 13 05:21:06.519: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8568 Nov 13 05:21:06.522: INFO: deleting *v1.Role: csi-mock-volumes-8568-2256/external-attacher-cfg-csi-mock-volumes-8568 Nov 13 05:21:06.526: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8568-2256/csi-attacher-role-cfg Nov 13 05:21:06.530: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-provisioner Nov 13 05:21:06.533: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8568 Nov 13 05:21:06.537: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8568 Nov 13 05:21:06.541: INFO: deleting *v1.Role: csi-mock-volumes-8568-2256/external-provisioner-cfg-csi-mock-volumes-8568 Nov 13 05:21:06.544: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8568-2256/csi-provisioner-role-cfg Nov 13 05:21:06.548: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-resizer Nov 13 05:21:06.551: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8568 Nov 13 05:21:06.556: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8568 Nov 13 05:21:06.560: INFO: deleting *v1.Role: csi-mock-volumes-8568-2256/external-resizer-cfg-csi-mock-volumes-8568 Nov 13 05:21:06.563: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-8568-2256/csi-resizer-role-cfg Nov 13 05:21:06.567: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-snapshotter Nov 13 05:21:06.570: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8568 Nov 13 05:21:06.574: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8568 Nov 13 05:21:06.577: INFO: deleting *v1.Role: csi-mock-volumes-8568-2256/external-snapshotter-leaderelection-csi-mock-volumes-8568 Nov 13 05:21:06.581: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8568-2256/external-snapshotter-leaderelection Nov 13 05:21:06.584: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8568-2256/csi-mock Nov 13 05:21:06.587: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8568 Nov 13 05:21:06.591: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8568 Nov 13 05:21:06.594: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8568 Nov 13 05:21:06.597: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8568 Nov 13 05:21:06.600: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8568 Nov 13 05:21:06.603: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8568 Nov 13 05:21:06.609: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8568 Nov 13 05:21:06.617: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8568-2256/csi-mockplugin Nov 13 05:21:06.625: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8568 Nov 13 05:21:06.635: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8568-2256/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8568-2256 STEP: Waiting for namespaces [csi-mock-volumes-8568-2256] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:21:18.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:28.367 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":2,"skipped":174,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:19:51.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-9766 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:19:51.585: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-attacher Nov 13 05:19:51.588: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9766 Nov 13 05:19:51.588: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9766 Nov 13 05:19:51.591: INFO: creating *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-9766 Nov 13 05:19:51.593: INFO: creating *v1.Role: csi-mock-volumes-9766-3595/external-attacher-cfg-csi-mock-volumes-9766 Nov 13 05:19:51.597: INFO: creating *v1.RoleBinding: csi-mock-volumes-9766-3595/csi-attacher-role-cfg Nov 13 05:19:51.600: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-provisioner Nov 13 05:19:51.602: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9766 Nov 13 05:19:51.602: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9766 Nov 13 05:19:51.605: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9766 Nov 13 05:19:51.608: INFO: creating *v1.Role: csi-mock-volumes-9766-3595/external-provisioner-cfg-csi-mock-volumes-9766 Nov 13 05:19:51.610: INFO: creating *v1.RoleBinding: csi-mock-volumes-9766-3595/csi-provisioner-role-cfg Nov 13 05:19:51.613: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-resizer Nov 13 05:19:51.616: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9766 Nov 13 05:19:51.616: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9766 Nov 13 05:19:51.619: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9766 Nov 13 05:19:51.622: INFO: creating *v1.Role: csi-mock-volumes-9766-3595/external-resizer-cfg-csi-mock-volumes-9766 Nov 13 05:19:51.624: INFO: creating *v1.RoleBinding: csi-mock-volumes-9766-3595/csi-resizer-role-cfg Nov 13 05:19:51.627: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-snapshotter Nov 13 05:19:51.630: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9766 Nov 13 05:19:51.630: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9766 Nov 13 05:19:51.632: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9766 Nov 13 05:19:51.635: INFO: creating *v1.Role: 
csi-mock-volumes-9766-3595/external-snapshotter-leaderelection-csi-mock-volumes-9766 Nov 13 05:19:51.637: INFO: creating *v1.RoleBinding: csi-mock-volumes-9766-3595/external-snapshotter-leaderelection Nov 13 05:19:51.640: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-mock Nov 13 05:19:51.642: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9766 Nov 13 05:19:51.645: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9766 Nov 13 05:19:51.647: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9766 Nov 13 05:19:51.650: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9766 Nov 13 05:19:51.652: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9766 Nov 13 05:19:51.656: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9766 Nov 13 05:19:51.659: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9766 Nov 13 05:19:51.662: INFO: creating *v1.StatefulSet: csi-mock-volumes-9766-3595/csi-mockplugin Nov 13 05:19:51.667: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9766 Nov 13 05:19:51.670: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9766" Nov 13 05:19:51.672: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9766 to register on node node2 STEP: Creating pod with fsGroup Nov 13 05:20:12.981: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:20:12.986: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z4tm9] to have phase Bound Nov 13 05:20:12.988: INFO: PersistentVolumeClaim pvc-z4tm9 found but phase is Pending instead of Bound. 
Nov 13 05:20:14.992: INFO: PersistentVolumeClaim pvc-z4tm9 found and phase=Bound (2.005480865s) Nov 13 05:20:27.018: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-9766] Namespace:csi-mock-volumes-9766 PodName:pvc-volume-tester-z8jpm ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:27.018: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:28.400: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-9766/csi-mock-volumes-9766'; sync] Namespace:csi-mock-volumes-9766 PodName:pvc-volume-tester-z8jpm ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:28.400: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:31.196: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-9766/csi-mock-volumes-9766] Namespace:csi-mock-volumes-9766 PodName:pvc-volume-tester-z8jpm ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:31.196: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:31.446: INFO: pod csi-mock-volumes-9766/pvc-volume-tester-z8jpm exec for cmd ls -l /mnt/test/csi-mock-volumes-9766/csi-mock-volumes-9766, stdout: -rw-r--r-- 1 root 14330 13 Nov 13 05:20 /mnt/test/csi-mock-volumes-9766/csi-mock-volumes-9766, stderr: Nov 13 05:20:31.446: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-9766] Namespace:csi-mock-volumes-9766 PodName:pvc-volume-tester-z8jpm ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:20:31.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-z8jpm Nov 13 05:20:31.597: INFO: Deleting pod "pvc-volume-tester-z8jpm" in namespace "csi-mock-volumes-9766" Nov 13 05:20:31.602: INFO: Wait up to 5m0s for pod 
"pvc-volume-tester-z8jpm" to be fully deleted STEP: Deleting claim pvc-z4tm9 Nov 13 05:21:07.618: INFO: Waiting up to 2m0s for PersistentVolume pvc-618d13f1-2b32-413b-92d0-d3d39311e94b to get deleted Nov 13 05:21:07.620: INFO: PersistentVolume pvc-618d13f1-2b32-413b-92d0-d3d39311e94b found and phase=Bound (1.91108ms) Nov 13 05:21:09.625: INFO: PersistentVolume pvc-618d13f1-2b32-413b-92d0-d3d39311e94b was removed STEP: Deleting storageclass csi-mock-volumes-9766-sct888n STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9766 STEP: Waiting for namespaces [csi-mock-volumes-9766] to vanish STEP: uninstalling csi mock driver Nov 13 05:21:15.641: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-attacher Nov 13 05:21:15.645: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9766 Nov 13 05:21:15.650: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9766 Nov 13 05:21:15.653: INFO: deleting *v1.Role: csi-mock-volumes-9766-3595/external-attacher-cfg-csi-mock-volumes-9766 Nov 13 05:21:15.656: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9766-3595/csi-attacher-role-cfg Nov 13 05:21:15.660: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-provisioner Nov 13 05:21:15.664: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9766 Nov 13 05:21:15.667: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9766 Nov 13 05:21:15.671: INFO: deleting *v1.Role: csi-mock-volumes-9766-3595/external-provisioner-cfg-csi-mock-volumes-9766 Nov 13 05:21:15.675: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9766-3595/csi-provisioner-role-cfg Nov 13 05:21:15.678: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-resizer Nov 13 05:21:15.681: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9766 Nov 13 05:21:15.684: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9766 Nov 
13 05:21:15.687: INFO: deleting *v1.Role: csi-mock-volumes-9766-3595/external-resizer-cfg-csi-mock-volumes-9766 Nov 13 05:21:15.691: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9766-3595/csi-resizer-role-cfg Nov 13 05:21:15.694: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-snapshotter Nov 13 05:21:15.697: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9766 Nov 13 05:21:15.701: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9766 Nov 13 05:21:15.704: INFO: deleting *v1.Role: csi-mock-volumes-9766-3595/external-snapshotter-leaderelection-csi-mock-volumes-9766 Nov 13 05:21:15.708: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9766-3595/external-snapshotter-leaderelection Nov 13 05:21:15.717: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9766-3595/csi-mock Nov 13 05:21:15.720: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9766 Nov 13 05:21:15.724: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9766 Nov 13 05:21:15.731: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9766 Nov 13 05:21:15.735: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9766 Nov 13 05:21:15.739: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9766 Nov 13 05:21:15.743: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9766 Nov 13 05:21:15.748: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9766 Nov 13 05:21:15.751: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9766-3595/csi-mockplugin Nov 13 05:21:15.756: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9766 STEP: deleting the driver namespace: csi-mock-volumes-9766-3595 STEP: Waiting for namespaces [csi-mock-volumes-9766-3595] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:90.260 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI FSGroupPolicy [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
should modify fsGroup if fsGroupPolicy=File
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":5,"skipped":339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:16.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:21:22.687: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-674d949a-0df2-44d6-af4c-3d40e372c43f] Namespace:persistent-local-volumes-test-8761 PodName:hostexec-node2-fhpdb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:21:22.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:21:22.774: INFO: Creating a PV followed by a PVC
Nov 13 05:21:22.782: INFO: Waiting for PV local-pv7n6nr to bind to PVC pvc-cnq2c
Nov 13 05:21:22.782: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cnq2c] to have phase Bound
Nov 13 05:21:22.785: INFO: PersistentVolumeClaim pvc-cnq2c found but phase is Pending instead of Bound.
Nov 13 05:21:24.790: INFO: PersistentVolumeClaim pvc-cnq2c found and phase=Bound (2.008222895s)
Nov 13 05:21:24.790: INFO: Waiting up to 3m0s for PersistentVolume local-pv7n6nr to have phase Bound
Nov 13 05:21:24.792: INFO: PersistentVolume local-pv7n6nr found and phase=Bound (1.865873ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Nov 13 05:21:28.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8761 exec pod-a73921d5-95a9-4ae4-8611-de5748e5e03f --namespace=persistent-local-volumes-test-8761 -- stat -c %g /mnt/volume1'
Nov 13 05:21:29.102: INFO: stderr: ""
Nov 13 05:21:29.102: INFO: stdout: "1234\n"
STEP: Create second pod with same fsGroup and check fsGroup is correct
STEP: Creating a pod
Nov 13 05:21:33.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8761 exec pod-4cf069c6-a7e9-40db-b2d9-1ca687a5f873 --namespace=persistent-local-volumes-test-8761 -- stat -c %g /mnt/volume1'
Nov 13 05:21:33.461: INFO: stderr: ""
Nov 13 05:21:33.462: INFO: stdout: "1234\n"
STEP: Deleting first pod
STEP: Deleting pod pod-a73921d5-95a9-4ae4-8611-de5748e5e03f in namespace persistent-local-volumes-test-8761
STEP: Deleting second pod
STEP: Deleting pod pod-4cf069c6-a7e9-40db-b2d9-1ca687a5f873 in namespace persistent-local-volumes-test-8761
[AfterEach] [Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:21:33.472: INFO: Deleting PersistentVolumeClaim "pvc-cnq2c"
Nov 13 05:21:33.476: INFO: Deleting PersistentVolume "local-pv7n6nr"
STEP: Removing the test directory
Nov 13 05:21:33.479: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-674d949a-0df2-44d6-af4c-3d40e372c43f] Namespace:persistent-local-volumes-test-8761 PodName:hostexec-node2-fhpdb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:21:33.479: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:33.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8761" for this suite.
• [SLOW TEST:16.932 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":6,"skipped":174,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:33.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Nov 13 05:21:33.621: INFO: Waiting up to 5m0s for pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca" in namespace "emptydir-4008" to be "Succeeded or Failed"
Nov 13 05:21:33.623: INFO: Pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 1.913604ms
Nov 13 05:21:35.626: INFO: Pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00495272s
Nov 13 05:21:37.631: INFO: Pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00975136s
Nov 13 05:21:39.636: INFO: Pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015543022s
Nov 13 05:21:41.642: INFO: Pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020800205s
STEP: Saw pod success
Nov 13 05:21:41.642: INFO: Pod "pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca" satisfied condition "Succeeded or Failed"
Nov 13 05:21:41.644: INFO: Trying to get logs from node node2 pod pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca container test-container:
STEP: delete the pod
Nov 13 05:21:41.656: INFO: Waiting for pod pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca to disappear
Nov 13 05:21:41.658: INFO: Pod pod-3acf4e76-68d5-4f88-a776-a3242c39f3ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:41.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4008" for this suite.
• [SLOW TEST:8.077 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":7,"skipped":181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:20:41.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] token should be plumbed down when csiServiceAccountTokenEnabled=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
STEP: Building a driver namespace object, basename csi-mock-volumes-4962
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:20:41.377: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-attacher
Nov 13 05:20:41.379: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4962
Nov 13 05:20:41.379: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4962
Nov 13 05:20:41.382: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4962
Nov 13 05:20:41.385: INFO: creating *v1.Role: csi-mock-volumes-4962-322/external-attacher-cfg-csi-mock-volumes-4962
Nov 13 05:20:41.388: INFO: creating *v1.RoleBinding: csi-mock-volumes-4962-322/csi-attacher-role-cfg
Nov 13 05:20:41.390: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-provisioner
Nov 13 05:20:41.392: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4962
Nov 13 05:20:41.392: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4962
Nov 13 05:20:41.395: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4962
Nov 13 05:20:41.398: INFO: creating *v1.Role: csi-mock-volumes-4962-322/external-provisioner-cfg-csi-mock-volumes-4962
Nov 13 05:20:41.400: INFO: creating *v1.RoleBinding: csi-mock-volumes-4962-322/csi-provisioner-role-cfg
Nov 13 05:20:41.403: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-resizer
Nov 13 05:20:41.406: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4962
Nov 13 05:20:41.406: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4962
Nov 13 05:20:41.408: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4962
Nov 13 05:20:41.411: INFO: creating *v1.Role: csi-mock-volumes-4962-322/external-resizer-cfg-csi-mock-volumes-4962
Nov 13 05:20:41.414: INFO: creating *v1.RoleBinding: csi-mock-volumes-4962-322/csi-resizer-role-cfg
Nov 13 05:20:41.421: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-snapshotter
Nov 13 05:20:41.423: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4962
Nov 13 05:20:41.423: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4962
Nov 13 05:20:41.426: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4962
Nov 13 05:20:41.428: INFO: creating *v1.Role: csi-mock-volumes-4962-322/external-snapshotter-leaderelection-csi-mock-volumes-4962
Nov 13 05:20:41.431: INFO: creating *v1.RoleBinding: csi-mock-volumes-4962-322/external-snapshotter-leaderelection
Nov 13 05:20:41.436: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-mock
Nov 13 05:20:41.438: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4962
Nov 13 05:20:41.441: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4962
Nov 13 05:20:41.444: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4962
Nov 13 05:20:41.446: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4962
Nov 13 05:20:41.449: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4962
Nov 13 05:20:41.451: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4962
Nov 13 05:20:41.454: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4962
Nov 13 05:20:41.456: INFO: creating *v1.StatefulSet: csi-mock-volumes-4962-322/csi-mockplugin
Nov 13 05:20:41.460: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4962
Nov 13 05:20:41.463: INFO: creating *v1.StatefulSet: csi-mock-volumes-4962-322/csi-mockplugin-attacher
Nov 13 05:20:41.466: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4962"
Nov 13 05:20:41.469: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4962 to register on node node2
STEP: Creating pod
Nov 13 05:20:50.984: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:20:50.989: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-dpfxv] to have phase Bound
Nov 13 05:20:50.990: INFO: PersistentVolumeClaim pvc-dpfxv found but phase is Pending instead of Bound.
Nov 13 05:20:52.995: INFO: PersistentVolumeClaim pvc-dpfxv found and phase=Bound (2.00596758s)
STEP: Deleting the previously created pod
Nov 13 05:21:18.018: INFO: Deleting pod "pvc-volume-tester-zwzwm" in namespace "csi-mock-volumes-4962"
Nov 13 05:21:18.025: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zwzwm" to be fully deleted
STEP: Checking CSI driver logs
Nov 13 05:21:22.045: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IktyTWkzSmRiTk51TF94Sm9XenUydlB2clE4ZDB1UU02V1V1TV9Dc0VvV2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjM2NzgxNDcxLCJpYXQiOjE2MzY3ODA4NzEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTQ5NjIiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLXp3endtIiwidWlkIjoiYzBkZjBiYjYtNjAyMy00MzE5LWE4MWItOTUzZWYyODAwMDk4In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiOWZiODMwYWMtNmYyYS00M2Y2LTg3MzUtN2Q4NWFmNzUwMGJiIn19LCJuYmYiOjE2MzY3ODA4NzEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTQ5NjI6ZGVmYXVsdCJ9.vfC4XIzImxHWHxw507sqYTHF1FTDoCMRx3ZuGkfLW5Kds-BxsSTkNIJYaSEWMlwIfkVuQmZnUAXYfE67gWNQS_nQvb2mGWB9oCB6gZ7PyarrjY3EF_KwJNtzZBMA9xpdDei16ZBcs11iDtpP7iN1kdV_DvgP6FQdX4nKd9nO9Ff8x7NFpahlHHGWnFUY0xV9HzcJzlBvIkZgZGwNo0x-IYr9RPJoddhjZdgBtOiayctLGIuR3dKiWIFVKzFL258pLLcvzrEDNqQFWKWwpuptklpbuB5XPWTd7I6fUPxqtJ3Dyjos0v1aQKaVeSbpTfgaS6G_RsrOwUgWt6nUg5bX5Q","expirationTimestamp":"2021-11-13T05:31:11Z"}}
Nov 13 05:21:22.045: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c0df0bb6-6023-4319-a81b-953ef2800098/volumes/kubernetes.io~csi/pvc-9baee5d9-d4a4-42d8-84c9-b6ebeabe092e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-zwzwm
Nov 13 05:21:22.045: INFO: Deleting pod "pvc-volume-tester-zwzwm" in namespace "csi-mock-volumes-4962"
STEP: Deleting claim pvc-dpfxv
Nov 13 05:21:22.053: INFO: Waiting up to 2m0s for PersistentVolume pvc-9baee5d9-d4a4-42d8-84c9-b6ebeabe092e to get deleted
Nov 13 05:21:22.055: INFO: PersistentVolume pvc-9baee5d9-d4a4-42d8-84c9-b6ebeabe092e found and phase=Bound (2.13808ms)
Nov 13 05:21:24.058: INFO: PersistentVolume pvc-9baee5d9-d4a4-42d8-84c9-b6ebeabe092e was removed
STEP: Deleting storageclass csi-mock-volumes-4962-scrsqt4
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4962
STEP: Waiting for namespaces [csi-mock-volumes-4962] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:21:30.072: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-attacher
Nov 13 05:21:30.075: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4962
Nov 13 05:21:30.080: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4962
Nov 13 05:21:30.083: INFO: deleting *v1.Role: csi-mock-volumes-4962-322/external-attacher-cfg-csi-mock-volumes-4962
Nov 13 05:21:30.086: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4962-322/csi-attacher-role-cfg
Nov 13 05:21:30.090: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-provisioner
Nov 13 05:21:30.093: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4962
Nov 13 05:21:30.096: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4962
Nov 13 05:21:30.100: INFO: deleting *v1.Role: csi-mock-volumes-4962-322/external-provisioner-cfg-csi-mock-volumes-4962
Nov 13 05:21:30.103: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4962-322/csi-provisioner-role-cfg
Nov 13 05:21:30.106: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-resizer
Nov 13 05:21:30.110: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4962
Nov 13 05:21:30.113: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4962
Nov 13 05:21:30.117: INFO: deleting *v1.Role: csi-mock-volumes-4962-322/external-resizer-cfg-csi-mock-volumes-4962
Nov 13 05:21:30.120: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4962-322/csi-resizer-role-cfg
Nov 13 05:21:30.125: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-snapshotter
Nov 13 05:21:30.128: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4962
Nov 13 05:21:30.131: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4962
Nov 13 05:21:30.135: INFO: deleting *v1.Role: csi-mock-volumes-4962-322/external-snapshotter-leaderelection-csi-mock-volumes-4962
Nov 13 05:21:30.138: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4962-322/external-snapshotter-leaderelection
Nov 13 05:21:30.142: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4962-322/csi-mock
Nov 13 05:21:30.146: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4962
Nov 13 05:21:30.149: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4962
Nov 13 05:21:30.153: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4962
Nov 13 05:21:30.156: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4962
Nov 13 05:21:30.159: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4962
Nov 13 05:21:30.162: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4962
Nov 13 05:21:30.165: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4962
Nov 13 05:21:30.168: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4962-322/csi-mockplugin
Nov 13 05:21:30.173: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4962
Nov 13 05:21:30.176: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4962-322/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-4962-322
STEP: Waiting for namespaces [csi-mock-volumes-4962-322] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:42.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:60.889 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
token should be plumbed down when csiServiceAccountTokenEnabled=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":4,"skipped":148,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:42.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should provision storage with non-default reclaim policy Retain
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403
Nov 13 05:21:42.233: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:42.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-2991" for this suite.
S [SKIPPING] [0.032 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
DynamicProvisioner [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152
should provision storage with non-default reclaim policy Retain [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:404
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:20:30.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be passed when podInfoOnMount=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-9734
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:20:30.629: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-attacher
Nov 13 05:20:30.633: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9734
Nov 13 05:20:30.633: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9734
Nov 13 05:20:30.635: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9734
Nov 13 05:20:30.638: INFO: creating *v1.Role: csi-mock-volumes-9734-7622/external-attacher-cfg-csi-mock-volumes-9734
Nov 13 05:20:30.640: INFO: creating *v1.RoleBinding: csi-mock-volumes-9734-7622/csi-attacher-role-cfg
Nov 13 05:20:30.643: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-provisioner
Nov 13 05:20:30.645: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9734
Nov 13 05:20:30.645: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9734
Nov 13 05:20:30.648: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9734
Nov 13 05:20:30.651: INFO: creating *v1.Role: csi-mock-volumes-9734-7622/external-provisioner-cfg-csi-mock-volumes-9734
Nov 13 05:20:30.654: INFO: creating *v1.RoleBinding: csi-mock-volumes-9734-7622/csi-provisioner-role-cfg
Nov 13 05:20:30.657: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-resizer
Nov 13 05:20:30.659: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9734
Nov 13 05:20:30.659: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9734
Nov 13 05:20:30.662: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9734
Nov 13 05:20:30.664: INFO: creating *v1.Role: csi-mock-volumes-9734-7622/external-resizer-cfg-csi-mock-volumes-9734
Nov 13 05:20:30.667: INFO: creating *v1.RoleBinding: csi-mock-volumes-9734-7622/csi-resizer-role-cfg
Nov 13 05:20:30.669: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-snapshotter
Nov 13 05:20:30.672: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9734
Nov 13 05:20:30.672: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9734
Nov 13 05:20:30.675: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9734
Nov 13 05:20:30.677: INFO: creating *v1.Role: csi-mock-volumes-9734-7622/external-snapshotter-leaderelection-csi-mock-volumes-9734
Nov 13 05:20:30.680: INFO: creating *v1.RoleBinding: csi-mock-volumes-9734-7622/external-snapshotter-leaderelection
Nov 13 05:20:30.682: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-mock
Nov 13 05:20:30.685: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9734
Nov 13 05:20:30.687: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9734
Nov 13 05:20:30.690: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9734
Nov 13 05:20:30.693: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9734
Nov 13 05:20:30.696: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9734
Nov 13 05:20:30.702: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9734
Nov 13 05:20:30.705: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9734
Nov 13 05:20:30.707: INFO: creating *v1.StatefulSet: csi-mock-volumes-9734-7622/csi-mockplugin
Nov 13 05:20:30.712: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9734
Nov 13 05:20:30.715: INFO: creating *v1.StatefulSet: csi-mock-volumes-9734-7622/csi-mockplugin-attacher
Nov 13 05:20:30.722: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9734"
Nov 13 05:20:30.728: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9734 to register on node node2
STEP: Creating pod
Nov 13 05:20:40.247: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:20:40.251: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4spl9] to have phase Bound
Nov 13 05:20:40.253: INFO: PersistentVolumeClaim pvc-4spl9 found but phase is Pending instead of Bound.
Nov 13 05:20:42.259: INFO: PersistentVolumeClaim pvc-4spl9 found and phase=Bound (2.007547483s)
STEP: checking for CSIInlineVolumes feature
Nov 13 05:21:02.295: INFO: Pod inline-volume-xthq9 has the following logs:
Nov 13 05:21:02.300: INFO: Deleting pod "inline-volume-xthq9" in namespace "csi-mock-volumes-9734"
Nov 13 05:21:02.304: INFO: Wait up to 5m0s for pod "inline-volume-xthq9" to be fully deleted
STEP: Deleting the previously created pod
Nov 13 05:21:12.312: INFO: Deleting pod "pvc-volume-tester-rkqqn" in namespace "csi-mock-volumes-9734"
Nov 13 05:21:12.316: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rkqqn" to be fully deleted
STEP: Checking CSI driver logs
Nov 13 05:21:22.330: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Nov 13 05:21:22.330: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-rkqqn
Nov 13 05:21:22.330: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9734
Nov 13 05:21:22.330: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: b921707b-d532-4c45-9a33-3d9eb2a99202
Nov 13 05:21:22.330: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Nov 13 05:21:22.330: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b921707b-d532-4c45-9a33-3d9eb2a99202/volumes/kubernetes.io~csi/pvc-8440a838-3194-441d-961a-aa96f2c9f037/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-rkqqn
Nov 13 05:21:22.330: INFO: Deleting pod "pvc-volume-tester-rkqqn" in namespace "csi-mock-volumes-9734"
STEP: Deleting claim pvc-4spl9
Nov 13 05:21:22.338: INFO: Waiting up to 2m0s for PersistentVolume pvc-8440a838-3194-441d-961a-aa96f2c9f037 to get deleted
Nov 13 05:21:22.340: INFO: PersistentVolume pvc-8440a838-3194-441d-961a-aa96f2c9f037 found and phase=Bound (1.887272ms)
Nov 13 05:21:24.344: INFO: PersistentVolume pvc-8440a838-3194-441d-961a-aa96f2c9f037 was removed
STEP: Deleting storageclass csi-mock-volumes-9734-scrdv2n
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9734
STEP: Waiting for namespaces [csi-mock-volumes-9734] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:21:30.358: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-attacher
Nov 13 05:21:30.361: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9734
Nov 13 05:21:30.364: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9734
Nov 13 05:21:30.368: INFO: deleting *v1.Role: csi-mock-volumes-9734-7622/external-attacher-cfg-csi-mock-volumes-9734
Nov 13 05:21:30.371: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9734-7622/csi-attacher-role-cfg
Nov 13 05:21:30.375: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-provisioner
Nov 13 05:21:30.379: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9734
Nov 13 05:21:30.383: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9734
Nov 13 05:21:30.386: INFO: deleting *v1.Role: csi-mock-volumes-9734-7622/external-provisioner-cfg-csi-mock-volumes-9734
Nov 13 05:21:30.389: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9734-7622/csi-provisioner-role-cfg
Nov 13 05:21:30.394: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-resizer
Nov 13 05:21:30.398: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9734
Nov 13 05:21:30.401: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9734
Nov 13 05:21:30.404: INFO: deleting *v1.Role: csi-mock-volumes-9734-7622/external-resizer-cfg-csi-mock-volumes-9734
Nov 13 05:21:30.407: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9734-7622/csi-resizer-role-cfg
Nov 13 05:21:30.411: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-snapshotter
Nov 13 05:21:30.417: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9734
Nov 13 05:21:30.425: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9734
Nov 13 05:21:30.434: INFO: deleting *v1.Role: csi-mock-volumes-9734-7622/external-snapshotter-leaderelection-csi-mock-volumes-9734
Nov 13 05:21:30.441: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9734-7622/external-snapshotter-leaderelection
Nov 13 05:21:30.444: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9734-7622/csi-mock
Nov 13 05:21:30.447: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9734
Nov 13 05:21:30.451: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9734
Nov 13 05:21:30.454: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9734
Nov 13 05:21:30.457: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9734
Nov 13 05:21:30.461: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9734
Nov 13 05:21:30.464: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9734
Nov 13 05:21:30.468: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9734
Nov 13 05:21:30.473: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9734-7622/csi-mockplugin
Nov 13 05:21:30.477: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9734
Nov 13 05:21:30.481: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9734-7622/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-9734-7622
STEP: Waiting for namespaces [csi-mock-volumes-9734-7622] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:42.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:71.942 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
should be passed when podInfoOnMount=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":6,"skipped":203,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:42.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Nov 13 05:21:42.535: INFO: Waiting up to 5m0s for pod "metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5" in namespace "downward-api-415" to be "Succeeded or Failed"
Nov 13 05:21:42.537: INFO: Pod "metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423507ms
Nov 13 05:21:44.541: INFO: Pod "metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005648763s
Nov 13 05:21:46.545: INFO: Pod "metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01029742s
STEP: Saw pod success
Nov 13 05:21:46.545: INFO: Pod "metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5" satisfied condition "Succeeded or Failed"
Nov 13 05:21:46.548: INFO: Trying to get logs from node node2 pod metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5 container client-container:
STEP: delete the pod
Nov 13 05:21:46.561: INFO: Waiting for pod metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5 to disappear
Nov 13 05:21:46.563: INFO: Pod metadata-volume-a1b9b306-a50b-4a95-92fe-3c3a379073a5 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:46.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-415" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":204,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:46.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:39
Nov 13 05:21:46.611: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:46.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "multi-az-6337" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-storage] Multi-AZ Cluster Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should schedule pods in the same zones as statically provisioned PVs [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:50

  Only supported for providers [gce gke] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40
------------------------------
S
------------------------------
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:46.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-char-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256
STEP: Create a pod for further testing
Nov 13 05:21:46.658: INFO: The status of Pod test-hostpath-type-hsxjx is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:48.664: INFO: The status of Pod test-hostpath-type-hsxjx is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:50.662: INFO: The status of Pod test-hostpath-type-hsxjx is Running (Ready = true)
STEP: running on node node2
STEP: Create a character device for further testing
Nov 13 05:21:50.665: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-3223 PodName:test-hostpath-type-hsxjx ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:21:50.665: INFO: >>> kubeConfig: /root/.kube/config
[It] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Character Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:52.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-char-dev-3223" for this suite.

• [SLOW TEST:6.299 seconds]
[sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory","total":-1,"completed":8,"skipped":212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Directory [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:41.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-directory
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Directory [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57
STEP: Create a pod for further testing
Nov 13 05:21:41.782: INFO: The status of Pod test-hostpath-type-jkkx2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:43.786: INFO: The status of Pod test-hostpath-type-jkkx2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:45.787: INFO: The status of Pod test-hostpath-type-jkkx2 is Running (Ready = true)
STEP: running on node node2
STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate
[It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76
[AfterEach] [sig-storage] HostPathType Directory [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:21:53.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-directory-761" for this suite.

• [SLOW TEST:12.102 seconds]
[sig-storage] HostPathType Directory [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory","total":-1,"completed":8,"skipped":216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:53.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-socket
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191
STEP: Create a pod for further testing
Nov 13 05:21:53.954: INFO: The status of Pod test-hostpath-type-62blm is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:55.958: INFO: The status of Pod test-hostpath-type-62blm is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:57.960: INFO: The status of Pod test-hostpath-type-62blm is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:21:59.960: INFO: The status of Pod test-hostpath-type-62blm is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:22:01.959: INFO: The status of Pod test-hostpath-type-62blm is Running (Ready = true)
STEP: running on node node2
[It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212
[AfterEach] [sig-storage] HostPathType Socket [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:22:05.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-socket-3063" for this suite.

• [SLOW TEST:12.073 seconds]
[sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset","total":-1,"completed":9,"skipped":249,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:42.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:21:46.349: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500 && mount --bind /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500 /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500] Namespace:persistent-local-volumes-test-3303 PodName:hostexec-node2-52v9l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:21:46.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:21:46.555: INFO: Creating a PV followed by a PVC
Nov 13 05:21:46.561: INFO: Waiting for PV local-pvpc96g to bind to PVC pvc-8zsx2
Nov 13 05:21:46.561: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8zsx2] to have phase Bound
Nov 13 05:21:46.564: INFO: PersistentVolumeClaim pvc-8zsx2 found but phase is Pending instead of Bound.
Nov 13 05:21:48.568: INFO: PersistentVolumeClaim pvc-8zsx2 found but phase is Pending instead of Bound.
Nov 13 05:21:50.571: INFO: PersistentVolumeClaim pvc-8zsx2 found but phase is Pending instead of Bound.
Nov 13 05:21:52.577: INFO: PersistentVolumeClaim pvc-8zsx2 found but phase is Pending instead of Bound.
Nov 13 05:21:54.580: INFO: PersistentVolumeClaim pvc-8zsx2 found but phase is Pending instead of Bound.
Nov 13 05:21:56.583: INFO: PersistentVolumeClaim pvc-8zsx2 found and phase=Bound (10.021487281s)
Nov 13 05:21:56.583: INFO: Waiting up to 3m0s for PersistentVolume local-pvpc96g to have phase Bound
Nov 13 05:21:56.586: INFO: PersistentVolume local-pvpc96g found and phase=Bound (2.645214ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Nov 13 05:22:02.614: INFO: pod "pod-5e395e4f-582d-4bab-9291-3e8936073a8d" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:22:02.614: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3303 PodName:pod-5e395e4f-582d-4bab-9291-3e8936073a8d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:22:02.614: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:22:02.697: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Nov 13 05:22:02.697: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3303 PodName:pod-5e395e4f-582d-4bab-9291-3e8936073a8d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:22:02.697: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:22:02.780: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Nov 13 05:22:08.801: INFO: pod "pod-1144d020-e2a0-4836-8ca2-b71a4d719008" created on Node "node2"
Nov 13 05:22:08.801: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3303 PodName:pod-1144d020-e2a0-4836-8ca2-b71a4d719008 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:22:08.801: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:22:08.928: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Nov 13 05:22:08.928: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3303 PodName:pod-1144d020-e2a0-4836-8ca2-b71a4d719008 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:22:08.928: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:22:09.014: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500 > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Nov 13 05:22:09.014: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3303 PodName:pod-5e395e4f-582d-4bab-9291-3e8936073a8d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:22:09.014: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:22:09.108: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-5e395e4f-582d-4bab-9291-3e8936073a8d in namespace persistent-local-volumes-test-3303
STEP: Deleting pod2
STEP: Deleting pod pod-1144d020-e2a0-4836-8ca2-b71a4d719008 in namespace persistent-local-volumes-test-3303
[AfterEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:22:09.118: INFO: Deleting PersistentVolumeClaim "pvc-8zsx2"
Nov 13 05:22:09.121: INFO: Deleting PersistentVolume "local-pvpc96g"
STEP: Removing the test directory
Nov 13 05:22:09.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500 && rm -r /tmp/local-volume-test-943276d1-1eba-4465-8a51-669ca93ca500] Namespace:persistent-local-volumes-test-3303 PodName:hostexec-node2-52v9l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:22:09.126: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:22:09.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3303" for this suite.

• [SLOW TEST:26.942 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:06.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1" Nov 13 05:22:10.067: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1 && dd if=/dev/zero of=/tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1/file] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:10.067: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:22:10.430: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:10.430: INFO: >>> 
kubeConfig: /root/.kube/config Nov 13 05:22:10.603: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1 && chmod o+rwx /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:10.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:22:10.952: INFO: Creating a PV followed by a PVC Nov 13 05:22:10.959: INFO: Waiting for PV local-pvt8z8n to bind to PVC pvc-kjt8v Nov 13 05:22:10.959: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kjt8v] to have phase Bound Nov 13 05:22:10.962: INFO: PersistentVolumeClaim pvc-kjt8v found but phase is Pending instead of Bound. Nov 13 05:22:12.966: INFO: PersistentVolumeClaim pvc-kjt8v found and phase=Bound (2.006809776s) Nov 13 05:22:12.966: INFO: Waiting up to 3m0s for PersistentVolume local-pvt8z8n to have phase Bound Nov 13 05:22:12.968: INFO: PersistentVolume local-pvt8z8n found and phase=Bound (2.234451ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:22:12.972: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:22:12.974: INFO: Deleting PersistentVolumeClaim "pvc-kjt8v" Nov 13 05:22:12.978: INFO: Deleting 
PersistentVolume "local-pvt8z8n" Nov 13 05:22:12.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:12.982: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:22:13.084: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:13.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1/file Nov 13 05:22:13.190: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:13.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1 Nov 13 05:22:13.282: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9127f82c-bdaf-4e3b-b0b8-0ffe621931a1] Namespace:persistent-local-volumes-test-25 PodName:hostexec-node2-d62h7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:13.282: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:22:13.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-25" for this suite. S [SKIPPING] [7.369 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:13.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:22:13.425: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:22:13.427: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "pod-disks-7183" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.048 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:13.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Nov 13 05:22:13.477: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Nov 13 05:22:13.482: INFO: Waiting up to 30s for PersistentVolume hostpath-vnjcd to have phase Available Nov 13 05:22:13.485: INFO: PersistentVolume hostpath-vnjcd found and phase=Available (2.611586ms) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Nov 13 05:22:13.491: 
INFO: Waiting up to 3m0s for PersistentVolume hostpath-vnjcd to get deleted Nov 13 05:22:13.495: INFO: PersistentVolume hostpath-vnjcd found and phase=Available (3.545772ms) Nov 13 05:22:15.497: INFO: PersistentVolume hostpath-vnjcd was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:22:15.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-5643" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Nov 13 05:22:15.506: INFO: AfterEach: Cleaning up test resources. Nov 13 05:22:15.506: INFO: pvc is nil Nov 13 05:22:15.506: INFO: Deleting PersistentVolume "hostpath-vnjcd" • ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:06.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:21:07.025: INFO: The status of Pod test-hostpath-type-s7q48 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:21:09.029: INFO: The status of Pod test-hostpath-type-s7q48 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:21:11.029: INFO: The status of Pod test-hostpath-type-s7q48 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:21:13.031: INFO: The status of Pod 
test-hostpath-type-s7q48 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:21:15.031: INFO: The status of Pod test-hostpath-type-s7q48 is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 13 05:21:15.034: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-2618 PodName:test-hostpath-type-s7q48 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:21:15.034: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:22:25.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-2618" for this suite. 
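The helper pod above creates the test device with `mknod /mnt/test/achardev c 89 1` (root-only, inside the pod), and the spec then expects a HostPathType error event because `HostPathBlockDev` requires a block device, not a character device. The type check itself boils down to a file-type test; a minimal shell sketch of that idea (not kubelet's actual Go code), using `/dev/null` — a character device on any Linux box — in place of the log's `achardev`:

```shell
# HostPathCharDev accepts only character devices, HostPathBlockDev only block
# devices. `test -c` / `test -b` are the shell equivalents of that check.
classify() {
  if [ -c "$1" ]; then echo "char device"
  elif [ -b "$1" ]; then echo "block device"
  else echo "neither"
  fi
}
classify /dev/null    # char device: would pass HostPathCharDev, fail HostPathBlockDev
classify /etc/hosts   # neither: a regular file fails both device types
```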
• [SLOW TEST:78.167 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev","total":-1,"completed":4,"skipped":204,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:25.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:22:25.190: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:22:25.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-9352" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:86 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:25.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:22:25.271: INFO: The status of Pod test-hostpath-type-5dzxs is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:22:27.275: INFO: The status of Pod test-hostpath-type-5dzxs is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:22:29.273: INFO: The status of Pod test-hostpath-type-5dzxs is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 13 05:22:29.275: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1267 PodName:test-hostpath-type-5dzxs 
ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:22:29.275: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:22:31.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1267" for this suite. • [SLOW TEST:6.385 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev","total":-1,"completed":5,"skipped":224,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:13.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pvk45kt [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:01.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2533" for this suite. 
• [SLOW TEST:107.541 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":-1,"completed":4,"skipped":229,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":10,"skipped":266,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:15.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745" Nov 13 05:22:19.554: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745 && dd 
if=/dev/zero of=/tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745/file] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:19.554: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:22:19.675: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:19.675: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:22:19.759: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745 && chmod o+rwx /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:22:19.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:22:20.015: INFO: Creating a PV followed by a PVC Nov 13 05:22:20.022: INFO: Waiting for PV local-pvcscrl to bind to PVC pvc-7nkqj Nov 13 05:22:20.022: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7nkqj] to have phase Bound Nov 13 05:22:20.024: INFO: PersistentVolumeClaim pvc-7nkqj found but phase is Pending instead of Bound. Nov 13 05:22:22.030: INFO: PersistentVolumeClaim pvc-7nkqj found but phase is Pending instead of Bound. 
Nov 13 05:22:24.033: INFO: PersistentVolumeClaim pvc-7nkqj found but phase is Pending instead of Bound. Nov 13 05:22:26.036: INFO: PersistentVolumeClaim pvc-7nkqj found but phase is Pending instead of Bound. Nov 13 05:22:28.040: INFO: PersistentVolumeClaim pvc-7nkqj found and phase=Bound (8.017282896s) Nov 13 05:22:28.040: INFO: Waiting up to 3m0s for PersistentVolume local-pvcscrl to have phase Bound Nov 13 05:22:28.043: INFO: PersistentVolume local-pvcscrl found and phase=Bound (3.237588ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:23:00.072: INFO: pod "pod-f0e62893-0625-4170-a719-91c64cda9fa3" created on Node "node2" STEP: Writing in pod1 Nov 13 05:23:00.072: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-f0e62893-0625-4170-a719-91c64cda9fa3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:23:00.072: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:23:00.147: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:23:00.147: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-f0e62893-0625-4170-a719-91c64cda9fa3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:23:00.147: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:23:00.226: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:23:04.251: INFO: pod 
"pod-d82d63ea-842d-492c-a45d-aa46752ca277" created on Node "node2" Nov 13 05:23:04.251: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-d82d63ea-842d-492c-a45d-aa46752ca277 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:23:04.251: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:23:04.359: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:23:04.359: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-d82d63ea-842d-492c-a45d-aa46752ca277 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:23:04.359: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:23:04.455: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:23:04.456: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9255 PodName:pod-f0e62893-0625-4170-a719-91c64cda9fa3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:23:04.456: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:23:04.537: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-f0e62893-0625-4170-a719-91c64cda9fa3 in namespace persistent-local-volumes-test-9255 STEP: Deleting pod2 STEP: Deleting pod pod-d82d63ea-842d-492c-a45d-aa46752ca277 in namespace persistent-local-volumes-test-9255 [AfterEach] 
[Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:23:04.547: INFO: Deleting PersistentVolumeClaim "pvc-7nkqj" Nov 13 05:23:04.551: INFO: Deleting PersistentVolume "local-pvcscrl" Nov 13 05:23:04.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:04.555: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:23:04.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:04.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745/file Nov 13 05:23:04.822: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:04.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745 Nov 13 05:23:04.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-52820f8e-40be-4942-a2a0-006527132745] Namespace:persistent-local-volumes-test-9255 PodName:hostexec-node2-dclqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:04.937: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:05.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9255" for this suite. • [SLOW TEST:49.554 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:05.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:23:05.163: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:05.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-7326" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:90 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:31.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:23:03.690: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-0653c78f-52ce-4800-83d5-39b751d08ffc && mount --bind /tmp/local-volume-test-0653c78f-52ce-4800-83d5-39b751d08ffc /tmp/local-volume-test-0653c78f-52ce-4800-83d5-39b751d08ffc] Namespace:persistent-local-volumes-test-857 PodName:hostexec-node1-ptx6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:03.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:23:03.808: INFO: Creating a PV followed by a PVC Nov 13 05:23:03.817: INFO: Waiting for PV local-pvqqbn2 to bind to PVC pvc-rr64c Nov 13 05:23:03.817: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rr64c] to have phase Bound Nov 13 05:23:03.820: INFO: PersistentVolumeClaim pvc-rr64c found but phase is Pending instead of Bound. 
Nov 13 05:23:05.823: INFO: PersistentVolumeClaim pvc-rr64c found and phase=Bound (2.006018223s) Nov 13 05:23:05.823: INFO: Waiting up to 3m0s for PersistentVolume local-pvqqbn2 to have phase Bound Nov 13 05:23:05.826: INFO: PersistentVolume local-pvqqbn2 found and phase=Bound (2.207138ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:23:05.830: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:23:05.832: INFO: Deleting PersistentVolumeClaim "pvc-rr64c" Nov 13 05:23:05.837: INFO: Deleting PersistentVolume "local-pvqqbn2" STEP: Removing the test directory Nov 13 05:23:05.841: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-0653c78f-52ce-4800-83d5-39b751d08ffc && rm -r /tmp/local-volume-test-0653c78f-52ce-4800-83d5-39b751d08ffc] Namespace:persistent-local-volumes-test-857 PodName:hostexec-node1-ptx6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:05.841: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:06.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-857" for this suite. 
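The `blockfswithformat` volume type exercised above is provisioned by exec'ing shell on the node (via `nsenter` into the host mount namespace). The same sequence can be reproduced by hand; a minimal sketch assuming a Linux host, where only the unprivileged part is runnable as-is and the `losetup`/`mount` steps, shown commented, need root:

```shell
# Recreate the backing file exactly as the e2e helper does: 5120 blocks of
# 4096 zero bytes, i.e. a 20 MiB file.
VOLDIR=$(mktemp -d)
dd if=/dev/zero of="$VOLDIR/file" bs=4096 count=5120 2>/dev/null

# Formatting a regular file works unprivileged; tolerate a missing ext4 mkfs.
mkfs -t ext4 -F -q "$VOLDIR/file" 2>/dev/null || true

# Root-only steps the helper runs next (and reverses in AfterEach):
#   losetup -f "$VOLDIR/file"
#   LOOPDEV=$(losetup | grep "$VOLDIR/file" | awk '{ print $1 }')
#   mount -t ext4 "$LOOPDEV" "$VOLDIR" && chmod o+rwx "$VOLDIR"
#   ...teardown: umount "$VOLDIR"; losetup -d "$LOOPDEV"; rm -r "$VOLDIR"

stat -c %s "$VOLDIR/file"   # prints 20971520 (4096 * 5120)
```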
S [SKIPPING] [34.382 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:01.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Nov 13 05:23:01.395: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-92" to be "Succeeded or Failed" Nov 13 05:23:01.401: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.980671ms Nov 13 05:23:03.407: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011658041s Nov 13 05:23:05.410: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015118978s Nov 13 05:23:07.424: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028272207s STEP: Saw pod success Nov 13 05:23:07.424: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 13 05:23:07.426: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-1: STEP: delete the pod Nov 13 05:23:07.445: INFO: Waiting for pod pod-host-path-test to disappear Nov 13 05:23:07.447: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:07.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-92" for this suite. 
• [SLOW TEST:6.096 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:22:09.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-478 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:22:09.380: INFO: creating *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-attacher Nov 13 05:22:09.383: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-478 Nov 13 05:22:09.383: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-478 Nov 13 05:22:09.386: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-478 Nov 13 05:22:09.388: INFO: creating *v1.Role: csi-mock-volumes-478-1537/external-attacher-cfg-csi-mock-volumes-478 Nov 13 05:22:09.391: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-478-1537/csi-attacher-role-cfg Nov 13 05:22:09.394: INFO: creating *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-provisioner Nov 13 05:22:09.397: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-478 Nov 13 05:22:09.397: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-478 Nov 13 05:22:09.399: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-478 Nov 13 05:22:09.402: INFO: creating *v1.Role: csi-mock-volumes-478-1537/external-provisioner-cfg-csi-mock-volumes-478 Nov 13 05:22:09.405: INFO: creating *v1.RoleBinding: csi-mock-volumes-478-1537/csi-provisioner-role-cfg Nov 13 05:22:09.408: INFO: creating *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-resizer Nov 13 05:22:09.411: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-478 Nov 13 05:22:09.411: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-478 Nov 13 05:22:09.414: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-478 Nov 13 05:22:09.419: INFO: creating *v1.Role: csi-mock-volumes-478-1537/external-resizer-cfg-csi-mock-volumes-478 Nov 13 05:22:09.422: INFO: creating *v1.RoleBinding: csi-mock-volumes-478-1537/csi-resizer-role-cfg Nov 13 05:22:09.425: INFO: creating *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-snapshotter Nov 13 05:22:09.427: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-478 Nov 13 05:22:09.427: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-478 Nov 13 05:22:09.430: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-478 Nov 13 05:22:09.433: INFO: creating *v1.Role: csi-mock-volumes-478-1537/external-snapshotter-leaderelection-csi-mock-volumes-478 Nov 13 05:22:09.436: INFO: creating *v1.RoleBinding: csi-mock-volumes-478-1537/external-snapshotter-leaderelection Nov 13 05:22:09.439: INFO: creating *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-mock 
Nov 13 05:22:09.441: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-478 Nov 13 05:22:09.445: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-478 Nov 13 05:22:09.448: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-478 Nov 13 05:22:09.451: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-478 Nov 13 05:22:09.453: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-478 Nov 13 05:22:09.456: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-478 Nov 13 05:22:09.458: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-478 Nov 13 05:22:09.461: INFO: creating *v1.StatefulSet: csi-mock-volumes-478-1537/csi-mockplugin Nov 13 05:22:09.465: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-478 Nov 13 05:22:09.468: INFO: creating *v1.StatefulSet: csi-mock-volumes-478-1537/csi-mockplugin-resizer Nov 13 05:22:09.472: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-478" Nov 13 05:22:09.474: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-478 to register on node node2 STEP: Creating pod Nov 13 05:22:18.990: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:22:19.008: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z99gg] to have phase Bound Nov 13 05:22:19.010: INFO: PersistentVolumeClaim pvc-z99gg found but phase is Pending instead of Bound. 
Nov 13 05:22:21.015: INFO: PersistentVolumeClaim pvc-z99gg found and phase=Bound (2.007069493s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-22ll5 Nov 13 05:22:27.054: INFO: Deleting pod "pvc-volume-tester-22ll5" in namespace "csi-mock-volumes-478" Nov 13 05:22:27.060: INFO: Wait up to 5m0s for pod "pvc-volume-tester-22ll5" to be fully deleted STEP: Deleting claim pvc-z99gg Nov 13 05:23:13.071: INFO: Waiting up to 2m0s for PersistentVolume pvc-5edba53f-440d-48db-b662-12125ede6cbd to get deleted Nov 13 05:23:13.073: INFO: PersistentVolume pvc-5edba53f-440d-48db-b662-12125ede6cbd found and phase=Bound (1.909604ms) Nov 13 05:23:15.077: INFO: PersistentVolume pvc-5edba53f-440d-48db-b662-12125ede6cbd was removed STEP: Deleting storageclass csi-mock-volumes-478-sc7mnts STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-478 STEP: Waiting for namespaces [csi-mock-volumes-478] to vanish STEP: uninstalling csi mock driver Nov 13 05:23:21.090: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-attacher Nov 13 05:23:21.094: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-478 Nov 13 05:23:21.097: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-478 Nov 13 05:23:21.101: INFO: deleting *v1.Role: csi-mock-volumes-478-1537/external-attacher-cfg-csi-mock-volumes-478 Nov 13 05:23:21.104: INFO: deleting *v1.RoleBinding: csi-mock-volumes-478-1537/csi-attacher-role-cfg Nov 13 05:23:21.109: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-provisioner Nov 13 05:23:21.113: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-478 Nov 13 05:23:21.116: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-478 Nov 13 05:23:21.119: INFO: deleting *v1.Role: csi-mock-volumes-478-1537/external-provisioner-cfg-csi-mock-volumes-478 Nov 
13 05:23:21.122: INFO: deleting *v1.RoleBinding: csi-mock-volumes-478-1537/csi-provisioner-role-cfg Nov 13 05:23:21.126: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-resizer Nov 13 05:23:21.129: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-478 Nov 13 05:23:21.132: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-478 Nov 13 05:23:21.137: INFO: deleting *v1.Role: csi-mock-volumes-478-1537/external-resizer-cfg-csi-mock-volumes-478 Nov 13 05:23:21.140: INFO: deleting *v1.RoleBinding: csi-mock-volumes-478-1537/csi-resizer-role-cfg Nov 13 05:23:21.143: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-snapshotter Nov 13 05:23:21.147: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-478 Nov 13 05:23:21.151: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-478 Nov 13 05:23:21.154: INFO: deleting *v1.Role: csi-mock-volumes-478-1537/external-snapshotter-leaderelection-csi-mock-volumes-478 Nov 13 05:23:21.159: INFO: deleting *v1.RoleBinding: csi-mock-volumes-478-1537/external-snapshotter-leaderelection Nov 13 05:23:21.162: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-478-1537/csi-mock Nov 13 05:23:21.165: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-478 Nov 13 05:23:21.169: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-478 Nov 13 05:23:21.172: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-478 Nov 13 05:23:21.175: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-478 Nov 13 05:23:21.179: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-478 Nov 13 05:23:21.182: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-478 Nov 13 05:23:21.185: INFO: deleting *v1.StorageClass: 
csi-mock-sc-csi-mock-volumes-478 Nov 13 05:23:21.188: INFO: deleting *v1.StatefulSet: csi-mock-volumes-478-1537/csi-mockplugin Nov 13 05:23:21.193: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-478 Nov 13 05:23:21.196: INFO: deleting *v1.StatefulSet: csi-mock-volumes-478-1537/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-478-1537 STEP: Waiting for namespaces [csi-mock-volumes-478-1537] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:33.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:83.901 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":6,"skipped":217,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:33.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:23:33.281: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:33.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8329" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:20:37.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-8164 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:20:37.782: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-attacher Nov 13 05:20:37.785: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8164 Nov 13 05:20:37.785: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8164 Nov 13 05:20:37.788: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8164 Nov 13 05:20:37.791: INFO: creating *v1.Role: csi-mock-volumes-8164-6347/external-attacher-cfg-csi-mock-volumes-8164 Nov 13 05:20:37.794: INFO: creating *v1.RoleBinding: csi-mock-volumes-8164-6347/csi-attacher-role-cfg Nov 13 05:20:37.799: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-provisioner Nov 13 05:20:37.802: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8164 Nov 13 05:20:37.802: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8164 Nov 13 05:20:37.804: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8164 Nov 13 05:20:37.807: INFO: creating *v1.Role: csi-mock-volumes-8164-6347/external-provisioner-cfg-csi-mock-volumes-8164 Nov 13 05:20:37.810: INFO: creating *v1.RoleBinding: csi-mock-volumes-8164-6347/csi-provisioner-role-cfg Nov 13 05:20:37.813: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-resizer Nov 13 05:20:37.815: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8164 Nov 13 05:20:37.815: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8164 Nov 13 05:20:37.817: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8164 Nov 13 05:20:37.821: INFO: creating *v1.Role: 
csi-mock-volumes-8164-6347/external-resizer-cfg-csi-mock-volumes-8164 Nov 13 05:20:37.823: INFO: creating *v1.RoleBinding: csi-mock-volumes-8164-6347/csi-resizer-role-cfg Nov 13 05:20:37.827: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-snapshotter Nov 13 05:20:37.829: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8164 Nov 13 05:20:37.829: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8164 Nov 13 05:20:37.832: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8164 Nov 13 05:20:37.835: INFO: creating *v1.Role: csi-mock-volumes-8164-6347/external-snapshotter-leaderelection-csi-mock-volumes-8164 Nov 13 05:20:37.837: INFO: creating *v1.RoleBinding: csi-mock-volumes-8164-6347/external-snapshotter-leaderelection Nov 13 05:20:37.840: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-mock Nov 13 05:20:37.842: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8164 Nov 13 05:20:37.845: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8164 Nov 13 05:20:37.849: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8164 Nov 13 05:20:37.851: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8164 Nov 13 05:20:37.853: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8164 Nov 13 05:20:37.856: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8164 Nov 13 05:20:37.858: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8164 Nov 13 05:20:37.861: INFO: creating *v1.StatefulSet: csi-mock-volumes-8164-6347/csi-mockplugin Nov 13 05:20:37.865: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8164 Nov 13 05:20:37.868: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8164" Nov 13 05:20:37.870: INFO: 
waiting for CSIDriver csi-mock-csi-mock-volumes-8164 to register on node node1 I1113 05:20:48.965257 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8164","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:20:49.046800 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:20:49.088583 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8164","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:20:49.090163 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:20:49.092062 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:20:49.664864 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8164"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:20:54.142: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:20:54.147: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4d9tf] to have phase Bound Nov 13 
05:20:54.148: INFO: PersistentVolumeClaim pvc-4d9tf found but phase is Pending instead of Bound. I1113 05:20:54.153708 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a21d2f1e-88f2-4777-b550-f4832438f156","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1113 05:20:54.155140 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a21d2f1e-88f2-4777-b550-f4832438f156","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-a21d2f1e-88f2-4777-b550-f4832438f156"}}},"Error":"","FullError":null} Nov 13 05:20:56.152: INFO: PersistentVolumeClaim pvc-4d9tf found and phase=Bound (2.005785553s) I1113 05:20:57.803609 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:20:57.806: INFO: >>> kubeConfig: /root/.kube/config I1113 05:20:57.906374 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a21d2f1e-88f2-4777-b550-f4832438f156/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a21d2f1e-88f2-4777-b550-f4832438f156","storage.kubernetes.io/csiProvisionerIdentity":"1636780849090-8081-csi-mock-csi-mock-volumes-8164"}},"Response":{},"Error":"","FullError":null} I1113 05:20:57.910563 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:20:57.912: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:57.999: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:20:58.145: INFO: >>> kubeConfig: /root/.kube/config I1113 05:20:58.254429 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a21d2f1e-88f2-4777-b550-f4832438f156/globalmount","target_path":"/var/lib/kubelet/pods/4d5d9243-ccd4-4f3c-881f-88700d8327bc/volumes/kubernetes.io~csi/pvc-a21d2f1e-88f2-4777-b550-f4832438f156/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a21d2f1e-88f2-4777-b550-f4832438f156","storage.kubernetes.io/csiProvisionerIdentity":"1636780849090-8081-csi-mock-csi-mock-volumes-8164"}},"Response":{},"Error":"","FullError":null} Nov 13 05:21:06.173: INFO: Deleting pod "pvc-volume-tester-vmkcj" in namespace "csi-mock-volumes-8164" Nov 13 05:21:06.178: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vmkcj" to be fully deleted Nov 13 05:21:11.526: INFO: >>> kubeConfig: /root/.kube/config I1113 05:21:11.628188 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/4d5d9243-ccd4-4f3c-881f-88700d8327bc/volumes/kubernetes.io~csi/pvc-a21d2f1e-88f2-4777-b550-f4832438f156/mount"},"Response":{},"Error":"","FullError":null} I1113 05:21:11.731070 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:21:11.732906 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a21d2f1e-88f2-4777-b550-f4832438f156/globalmount"},"Response":{},"Error":"","FullError":null} I1113 05:21:26.646008 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 13 05:21:27.190: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4d9tf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8164", SelfLink:"", UID:"a21d2f1e-88f2-4777-b550-f4832438f156", ResourceVersion:"177880", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377654, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001be7008), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001be7020)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004e867d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004e867e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), 
Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:21:27.190: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4d9tf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8164", SelfLink:"", UID:"a21d2f1e-88f2-4777-b550-f4832438f156", ResourceVersion:"177881", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377654, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8164"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0047ab6e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0047ab6f8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0047ab710), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0047ab728)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004e77a10), VolumeMode:(*v1.PersistentVolumeMode)(0xc004e77a20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:21:27.191: INFO: PVC event MODIFIED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4d9tf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8164", SelfLink:"", UID:"a21d2f1e-88f2-4777-b550-f4832438f156", ResourceVersion:"177887", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377654, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8164"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d5b6b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d5b6c8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d5b6e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d5b6f8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a21d2f1e-88f2-4777-b550-f4832438f156", StorageClassName:(*string)(0xc004e9ea60), VolumeMode:(*v1.PersistentVolumeMode)(0xc004e9ea70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:21:27.191: INFO: PVC event 
MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4d9tf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8164", SelfLink:"", UID:"a21d2f1e-88f2-4777-b550-f4832438f156", ResourceVersion:"177890", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377654, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8164"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b28468), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b28480)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b28498), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b284b0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a21d2f1e-88f2-4777-b550-f4832438f156", StorageClassName:(*string)(0xc004d42280), VolumeMode:(*v1.PersistentVolumeMode)(0xc004d42290), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:21:27.191: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4d9tf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8164", SelfLink:"", UID:"a21d2f1e-88f2-4777-b550-f4832438f156", ResourceVersion:"178985", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377654, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004b284e0), DeletionGracePeriodSeconds:(*int64)(0xc0031ce498), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8164"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b284f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b28510)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004b28528), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004b28540)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a21d2f1e-88f2-4777-b550-f4832438f156", StorageClassName:(*string)(0xc004d422d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004d422e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:21:27.191: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4d9tf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8164", SelfLink:"", UID:"a21d2f1e-88f2-4777-b550-f4832438f156", ResourceVersion:"178986", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377654, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004582ea0), DeletionGracePeriodSeconds:(*int64)(0xc0048bd038), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8164"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582eb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582ed0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582ee8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582f00)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, 
VolumeName:"pvc-a21d2f1e-88f2-4777-b550-f4832438f156", StorageClassName:(*string)(0xc004b048a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004b048b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-vmkcj Nov 13 05:21:27.191: INFO: Deleting pod "pvc-volume-tester-vmkcj" in namespace "csi-mock-volumes-8164" STEP: Deleting claim pvc-4d9tf STEP: Deleting storageclass csi-mock-volumes-8164-scv5km6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8164 STEP: Waiting for namespaces [csi-mock-volumes-8164] to vanish STEP: uninstalling csi mock driver Nov 13 05:21:33.738: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-attacher Nov 13 05:21:33.741: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8164 Nov 13 05:21:33.745: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8164 Nov 13 05:21:33.748: INFO: deleting *v1.Role: csi-mock-volumes-8164-6347/external-attacher-cfg-csi-mock-volumes-8164 Nov 13 05:21:33.753: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8164-6347/csi-attacher-role-cfg Nov 13 05:21:33.757: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-provisioner Nov 13 05:21:33.761: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8164 Nov 13 05:21:33.765: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8164 Nov 13 05:21:33.769: INFO: deleting *v1.Role: csi-mock-volumes-8164-6347/external-provisioner-cfg-csi-mock-volumes-8164 Nov 13 05:21:33.772: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-8164-6347/csi-provisioner-role-cfg Nov 13 05:21:33.776: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-resizer Nov 13 05:21:33.779: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8164 Nov 13 05:21:33.782: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8164 Nov 13 05:21:33.787: INFO: deleting *v1.Role: csi-mock-volumes-8164-6347/external-resizer-cfg-csi-mock-volumes-8164 Nov 13 05:21:33.790: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8164-6347/csi-resizer-role-cfg Nov 13 05:21:33.793: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-snapshotter Nov 13 05:21:33.797: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8164 Nov 13 05:21:33.800: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8164 Nov 13 05:21:33.803: INFO: deleting *v1.Role: csi-mock-volumes-8164-6347/external-snapshotter-leaderelection-csi-mock-volumes-8164 Nov 13 05:21:33.808: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8164-6347/external-snapshotter-leaderelection Nov 13 05:21:33.811: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8164-6347/csi-mock Nov 13 05:21:33.815: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8164 Nov 13 05:21:33.818: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8164 Nov 13 05:21:33.822: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8164 Nov 13 05:21:33.825: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8164 Nov 13 05:21:33.829: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8164 Nov 13 05:21:33.833: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8164 Nov 13 05:21:33.836: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8164 Nov 13 
05:21:33.842: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8164-6347/csi-mockplugin Nov 13 05:21:33.846: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8164 STEP: deleting the driver namespace: csi-mock-volumes-8164-6347 STEP: Waiting for namespaces [csi-mock-volumes-8164-6347] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:41.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:184.155 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":2,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:05.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-5389 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:23:05.300: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-attacher Nov 13 05:23:05.303: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5389 Nov 13 05:23:05.303: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5389 Nov 13 05:23:05.305: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5389 Nov 13 05:23:05.308: INFO: creating *v1.Role: csi-mock-volumes-5389-7083/external-attacher-cfg-csi-mock-volumes-5389 Nov 13 05:23:05.310: INFO: creating *v1.RoleBinding: csi-mock-volumes-5389-7083/csi-attacher-role-cfg Nov 13 05:23:05.313: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-provisioner Nov 13 05:23:05.316: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5389 Nov 13 05:23:05.316: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5389 Nov 13 05:23:05.319: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5389 Nov 13 05:23:05.322: INFO: creating *v1.Role: csi-mock-volumes-5389-7083/external-provisioner-cfg-csi-mock-volumes-5389 Nov 13 05:23:05.325: INFO: creating *v1.RoleBinding: csi-mock-volumes-5389-7083/csi-provisioner-role-cfg Nov 13 05:23:05.327: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-resizer Nov 13 05:23:05.330: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5389 Nov 13 05:23:05.330: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5389 Nov 13 05:23:05.333: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5389 Nov 13 05:23:05.335: INFO: creating *v1.Role: csi-mock-volumes-5389-7083/external-resizer-cfg-csi-mock-volumes-5389 Nov 13 05:23:05.339: INFO: creating *v1.RoleBinding: csi-mock-volumes-5389-7083/csi-resizer-role-cfg Nov 13 05:23:05.341: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-snapshotter Nov 13 05:23:05.344: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-5389 Nov 13 05:23:05.344: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5389 Nov 13 05:23:05.347: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5389 Nov 13 05:23:05.350: INFO: creating *v1.Role: csi-mock-volumes-5389-7083/external-snapshotter-leaderelection-csi-mock-volumes-5389 Nov 13 05:23:05.353: INFO: creating *v1.RoleBinding: csi-mock-volumes-5389-7083/external-snapshotter-leaderelection Nov 13 05:23:05.356: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-mock Nov 13 05:23:05.358: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5389 Nov 13 05:23:05.363: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5389 Nov 13 05:23:05.366: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5389 Nov 13 05:23:05.370: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5389 Nov 13 05:23:05.373: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5389 Nov 13 05:23:05.376: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5389 Nov 13 05:23:05.379: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5389 Nov 13 05:23:05.381: INFO: creating *v1.StatefulSet: csi-mock-volumes-5389-7083/csi-mockplugin Nov 13 05:23:05.386: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5389 Nov 13 05:23:05.388: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5389" Nov 13 05:23:05.390: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5389 to register on node node2 STEP: Creating pod Nov 13 05:23:14.935: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:23:14.939: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9pkzx] to have phase Bound Nov 
13 05:23:14.941: INFO: PersistentVolumeClaim pvc-9pkzx found but phase is Pending instead of Bound. Nov 13 05:23:16.945: INFO: PersistentVolumeClaim pvc-9pkzx found and phase=Bound (2.006372787s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-xs9sb Nov 13 05:23:20.973: INFO: Deleting pod "pvc-volume-tester-xs9sb" in namespace "csi-mock-volumes-5389" Nov 13 05:23:20.977: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xs9sb" to be fully deleted STEP: Deleting claim pvc-9pkzx Nov 13 05:23:32.992: INFO: Waiting up to 2m0s for PersistentVolume pvc-5a1a1ce5-d4f4-44d3-9af1-71a89514b86d to get deleted Nov 13 05:23:32.994: INFO: PersistentVolume pvc-5a1a1ce5-d4f4-44d3-9af1-71a89514b86d found and phase=Bound (1.836903ms) Nov 13 05:23:34.996: INFO: PersistentVolume pvc-5a1a1ce5-d4f4-44d3-9af1-71a89514b86d was removed STEP: Deleting storageclass csi-mock-volumes-5389-scpn2jl STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5389 STEP: Waiting for namespaces [csi-mock-volumes-5389] to vanish STEP: uninstalling csi mock driver Nov 13 05:23:41.008: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-attacher Nov 13 05:23:41.014: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5389 Nov 13 05:23:41.018: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5389 Nov 13 05:23:41.021: INFO: deleting *v1.Role: csi-mock-volumes-5389-7083/external-attacher-cfg-csi-mock-volumes-5389 Nov 13 05:23:41.024: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5389-7083/csi-attacher-role-cfg Nov 13 05:23:41.028: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-provisioner Nov 13 05:23:41.032: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5389 Nov 13 05:23:41.035: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5389 Nov 13 05:23:41.039: INFO: deleting *v1.Role: 
csi-mock-volumes-5389-7083/external-provisioner-cfg-csi-mock-volumes-5389 Nov 13 05:23:41.046: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5389-7083/csi-provisioner-role-cfg Nov 13 05:23:41.055: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-resizer Nov 13 05:23:41.063: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5389 Nov 13 05:23:41.068: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5389 Nov 13 05:23:41.071: INFO: deleting *v1.Role: csi-mock-volumes-5389-7083/external-resizer-cfg-csi-mock-volumes-5389 Nov 13 05:23:41.075: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5389-7083/csi-resizer-role-cfg Nov 13 05:23:41.079: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-snapshotter Nov 13 05:23:41.083: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5389 Nov 13 05:23:41.087: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5389 Nov 13 05:23:41.090: INFO: deleting *v1.Role: csi-mock-volumes-5389-7083/external-snapshotter-leaderelection-csi-mock-volumes-5389 Nov 13 05:23:41.094: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5389-7083/external-snapshotter-leaderelection Nov 13 05:23:41.097: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5389-7083/csi-mock Nov 13 05:23:41.101: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5389 Nov 13 05:23:41.104: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5389 Nov 13 05:23:41.107: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5389 Nov 13 05:23:41.111: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5389 Nov 13 05:23:41.114: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5389 Nov 13 05:23:41.117: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-snapshotter-role-csi-mock-volumes-5389 Nov 13 05:23:41.120: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5389 Nov 13 05:23:41.123: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5389-7083/csi-mockplugin Nov 13 05:23:41.127: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5389 STEP: deleting the driver namespace: csi-mock-volumes-5389-7083 STEP: Waiting for namespaces [csi-mock-volumes-5389-7083] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:47.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:41.916 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":12,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:47.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 
05:23:47.215: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:47.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-8545" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:41.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:23:41.956: INFO: The status of Pod test-hostpath-type-gc6hx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:23:43.960: INFO: The status of Pod test-hostpath-type-gc6hx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:23:45.960: INFO: The status of Pod test-hostpath-type-gc6hx is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is 
HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:47.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-3983" for this suite. • [SLOW TEST:6.077 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile","total":-1,"completed":3,"skipped":77,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:00.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-7456 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:21:00.705: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-attacher Nov 13 05:21:00.709: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7456 Nov 13 05:21:00.709: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7456 Nov 13 05:21:00.712: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7456 Nov 13 05:21:00.715: INFO: creating *v1.Role: csi-mock-volumes-7456-3681/external-attacher-cfg-csi-mock-volumes-7456 Nov 13 05:21:00.718: INFO: creating *v1.RoleBinding: csi-mock-volumes-7456-3681/csi-attacher-role-cfg Nov 13 05:21:00.721: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-provisioner Nov 13 05:21:00.723: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7456 Nov 13 05:21:00.723: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7456 Nov 13 05:21:00.726: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7456 Nov 13 05:21:00.728: INFO: creating *v1.Role: csi-mock-volumes-7456-3681/external-provisioner-cfg-csi-mock-volumes-7456 Nov 13 05:21:00.731: INFO: creating *v1.RoleBinding: csi-mock-volumes-7456-3681/csi-provisioner-role-cfg Nov 13 05:21:00.734: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-resizer Nov 13 05:21:00.736: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7456 Nov 13 05:21:00.736: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7456 Nov 13 05:21:00.739: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7456 Nov 13 05:21:00.742: INFO: creating *v1.Role: csi-mock-volumes-7456-3681/external-resizer-cfg-csi-mock-volumes-7456 Nov 13 05:21:00.744: INFO: creating *v1.RoleBinding: csi-mock-volumes-7456-3681/csi-resizer-role-cfg Nov 13 05:21:00.747: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-snapshotter Nov 13 05:21:00.750: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-7456 Nov 13 05:21:00.750: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7456 Nov 13 05:21:00.752: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7456 Nov 13 05:21:00.755: INFO: creating *v1.Role: csi-mock-volumes-7456-3681/external-snapshotter-leaderelection-csi-mock-volumes-7456 Nov 13 05:21:00.757: INFO: creating *v1.RoleBinding: csi-mock-volumes-7456-3681/external-snapshotter-leaderelection Nov 13 05:21:00.760: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-mock Nov 13 05:21:00.763: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7456 Nov 13 05:21:00.765: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7456 Nov 13 05:21:00.768: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7456 Nov 13 05:21:00.771: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7456 Nov 13 05:21:00.773: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7456 Nov 13 05:21:00.776: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7456 Nov 13 05:21:00.778: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7456 Nov 13 05:21:00.781: INFO: creating *v1.StatefulSet: csi-mock-volumes-7456-3681/csi-mockplugin Nov 13 05:21:00.786: INFO: creating *v1.StatefulSet: csi-mock-volumes-7456-3681/csi-mockplugin-attacher Nov 13 05:21:00.790: INFO: creating *v1.StatefulSet: csi-mock-volumes-7456-3681/csi-mockplugin-resizer Nov 13 05:21:00.794: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7456 to register on node node1 STEP: Creating pod Nov 13 05:21:17.064: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:21:17.069: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims 
[pvc-vpp26] to have phase Bound Nov 13 05:21:17.071: INFO: PersistentVolumeClaim pvc-vpp26 found but phase is Pending instead of Bound. Nov 13 05:21:19.074: INFO: PersistentVolumeClaim pvc-vpp26 found and phase=Bound (2.005621758s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-9c8xb Nov 13 05:23:09.111: INFO: Deleting pod "pvc-volume-tester-9c8xb" in namespace "csi-mock-volumes-7456" Nov 13 05:23:09.115: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9c8xb" to be fully deleted STEP: Deleting claim pvc-vpp26 Nov 13 05:23:39.129: INFO: Waiting up to 2m0s for PersistentVolume pvc-08713955-d615-4f18-bdaf-9590965fc9e4 to get deleted Nov 13 05:23:39.132: INFO: PersistentVolume pvc-08713955-d615-4f18-bdaf-9590965fc9e4 found and phase=Bound (3.060073ms) Nov 13 05:23:41.135: INFO: PersistentVolume pvc-08713955-d615-4f18-bdaf-9590965fc9e4 was removed STEP: Deleting storageclass csi-mock-volumes-7456-sc8qmbf STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7456 STEP: Waiting for namespaces [csi-mock-volumes-7456] to vanish STEP: uninstalling csi mock driver Nov 13 05:23:47.147: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-attacher Nov 13 05:23:47.151: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7456 Nov 13 05:23:47.155: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7456 Nov 13 05:23:47.158: INFO: deleting *v1.Role: csi-mock-volumes-7456-3681/external-attacher-cfg-csi-mock-volumes-7456 Nov 13 05:23:47.161: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7456-3681/csi-attacher-role-cfg Nov 13 05:23:47.165: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-provisioner Nov 13 05:23:47.169: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7456 Nov 13 05:23:47.172: INFO: deleting *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-7456 Nov 13 05:23:47.176: INFO: deleting *v1.Role: csi-mock-volumes-7456-3681/external-provisioner-cfg-csi-mock-volumes-7456 Nov 13 05:23:47.179: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7456-3681/csi-provisioner-role-cfg Nov 13 05:23:47.183: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-resizer Nov 13 05:23:47.186: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7456 Nov 13 05:23:47.189: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7456 Nov 13 05:23:47.193: INFO: deleting *v1.Role: csi-mock-volumes-7456-3681/external-resizer-cfg-csi-mock-volumes-7456 Nov 13 05:23:47.196: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7456-3681/csi-resizer-role-cfg Nov 13 05:23:47.199: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-snapshotter Nov 13 05:23:47.203: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7456 Nov 13 05:23:47.206: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7456 Nov 13 05:23:47.210: INFO: deleting *v1.Role: csi-mock-volumes-7456-3681/external-snapshotter-leaderelection-csi-mock-volumes-7456 Nov 13 05:23:47.213: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7456-3681/external-snapshotter-leaderelection Nov 13 05:23:47.221: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7456-3681/csi-mock Nov 13 05:23:47.227: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7456 Nov 13 05:23:47.234: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7456 Nov 13 05:23:47.237: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7456 Nov 13 05:23:47.240: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7456 Nov 13 05:23:47.244: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-7456 Nov 13 05:23:47.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7456 Nov 13 05:23:47.250: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7456 Nov 13 05:23:47.253: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7456-3681/csi-mockplugin Nov 13 05:23:47.256: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7456-3681/csi-mockplugin-attacher Nov 13 05:23:47.260: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7456-3681/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-7456-3681 STEP: Waiting for namespaces [csi-mock-volumes-7456-3681] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:59.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:178.640 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":6,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:59.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:23:59.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2429" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":7,"skipped":255,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:07.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:23:27.649: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d-backend && mount --bind /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d-backend /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d-backend && ln -s 
/tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d-backend /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node1-s7chx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:27.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:23:27.913: INFO: Creating a PV followed by a PVC Nov 13 05:23:27.920: INFO: Waiting for PV local-pvdlt74 to bind to PVC pvc-q9rlb Nov 13 05:23:27.920: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-q9rlb] to have phase Bound Nov 13 05:23:27.922: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. Nov 13 05:23:29.925: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. Nov 13 05:23:31.929: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. Nov 13 05:23:33.933: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. Nov 13 05:23:35.937: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. Nov 13 05:23:37.940: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. Nov 13 05:23:39.944: INFO: PersistentVolumeClaim pvc-q9rlb found but phase is Pending instead of Bound. 
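The run of "found but phase is Pending instead of Bound." entries above is produced by a simple poll loop: check the claim's phase roughly every two seconds until it reports Bound or the timeout expires, then log the elapsed time. A minimal sketch of that pattern (function and names here are illustrative, not the actual e2e framework code):

```python
import time

def wait_for_pvc_phase(get_phase, want="Bound", timeout=180.0, interval=2.0):
    """Poll get_phase() until it returns `want` or `timeout` elapses.

    Mirrors the log pattern above: each miss logs a 'found but phase is
    Pending instead of Bound.' line; a hit returns the elapsed seconds.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase == want:
            return time.monotonic() - start  # elapsed, e.g. (14.03026646s)
        print(f"PersistentVolumeClaim found but phase is {phase} instead of {want}.")
        time.sleep(interval)
    raise TimeoutError(f"claim never reached phase {want} within {timeout}s")

# Simulated claim that becomes Bound on the third poll:
phases = iter(["Pending", "Pending", "Bound"])
elapsed = wait_for_pvc_phase(lambda: next(phases), interval=0.01)
print(f"phase=Bound ({elapsed:.2f}s)")
```

The elapsed time printed at the end corresponds to the "(14.03026646s)" suffix the framework logs once the claim binds.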
Nov 13 05:23:41.950: INFO: PersistentVolumeClaim pvc-q9rlb found and phase=Bound (14.03026646s)
Nov 13 05:23:41.950: INFO: Waiting up to 3m0s for PersistentVolume local-pvdlt74 to have phase Bound
Nov 13 05:23:41.953: INFO: PersistentVolume local-pvdlt74 found and phase=Bound (2.88001ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Nov 13 05:23:53.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2227 exec pod-a6515418-d3fe-4053-a92e-27f852b4fa4d --namespace=persistent-local-volumes-test-2227 -- stat -c %g /mnt/volume1'
Nov 13 05:23:54.514: INFO: stderr: ""
Nov 13 05:23:54.514: INFO: stdout: "1000\n"
Nov 13 05:23:56.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2227 exec pod-a6515418-d3fe-4053-a92e-27f852b4fa4d --namespace=persistent-local-volumes-test-2227 -- stat -c %g /mnt/volume1'
Nov 13 05:23:56.799: INFO: stderr: ""
Nov 13 05:23:56.799: INFO: stdout: "1000\n"
Nov 13 05:23:58.800: FAIL: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-a6515418-d3fe-4053-a92e-27f852b4fa4d
Unexpected error:
    <*errors.errorString | 0xc0049df550>: {
        s: "Failed to find \"1234\", last result: \"1000\n\"",
    }
    Failed to find "1234", last result: "1000
    "
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.createPodWithFsGroupTest(0xc003cb2120, 0xc00530a120, 0x4d2, 0x4d2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810 +0x317
k8s.io/kubernetes/test/e2e/storage.glob..func21.2.6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:277 +0x8d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00044af00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00044af00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00044af00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:23:58.802: INFO: Deleting PersistentVolumeClaim "pvc-q9rlb"
Nov 13 05:23:58.807: INFO: Deleting PersistentVolume "local-pvdlt74"
STEP: Removing the test directory
Nov 13 05:23:58.811: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d && umount /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d-backend && rm -r /tmp/local-volume-test-b59f4740-4d21-48f9-97c1-3b39a190426d-backend] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node1-s7chx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:23:58.811: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "persistent-local-volumes-test-2227".
STEP: Found 12 events.
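The failing check above (createPodWithFsGroupTest) repeatedly runs `stat -c %g /mnt/volume1` inside the pod and compares stdout against the expected fsGroup; the stdout carries a trailing newline ("1000\n"), which is why the error message quotes it verbatim. A hedged sketch of that verify loop (the command runner is a stand-in for the kubectl exec shown above, not the framework's actual helper):

```python
def check_fsgroup(run_stat, expected_gid, attempts=3):
    """Retry `stat -c %g` output until the group id matches expected_gid.

    Returns the last observed stdout on success; raises on exhaustion with
    an error shaped like 'Failed to find "1234", last result: "1000\n"'.
    """
    last = ""
    for _ in range(attempts):
        last = run_stat()  # stdout of: stat -c %g /mnt/volume1
        if last.strip() == str(expected_gid):
            return last
    raise AssertionError(f'Failed to find "{expected_gid}", last result: "{last}"')

# The volume in this run was already owned by gid 1000, so expecting 1234 fails:
try:
    check_fsgroup(lambda: "1000\n", 1234)
except AssertionError as exc:
    print(exc)
```

Note the `.strip()` before comparing: without it, even a matching gid would fail because of the trailing newline in the `stat` output.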
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:07 +0000 UTC - event for hostexec-node1-s7chx: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-2227/hostexec-node1-s7chx to node1
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:15 +0000 UTC - event for hostexec-node1-s7chx: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:16 +0000 UTC - event for hostexec-node1-s7chx: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 309.715204ms
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:18 +0000 UTC - event for hostexec-node1-s7chx: {kubelet node1} Created: Created container agnhost-container
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:25 +0000 UTC - event for hostexec-node1-s7chx: {kubelet node1} Started: Started container agnhost-container
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:27 +0000 UTC - event for pvc-q9rlb: {persistentvolume-controller } ProvisioningFailed: no volume plugin matched name: kubernetes.io/no-provisioner
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:41 +0000 UTC - event for pod-a6515418-d3fe-4053-a92e-27f852b4fa4d: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-2227/pod-a6515418-d3fe-4053-a92e-27f852b4fa4d to node1
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:42 +0000 UTC - event for pod-a6515418-d3fe-4053-a92e-27f852b4fa4d: {kubelet node1} AlreadyMountedVolume: The requested fsGroup is 1234, but the volume local-pvdlt74 has GID 1000. The volume may not be shareable.
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:44 +0000 UTC - event for pod-a6515418-d3fe-4053-a92e-27f852b4fa4d: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:44 +0000 UTC - event for pod-a6515418-d3fe-4053-a92e-27f852b4fa4d: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 326.326304ms
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:44 +0000 UTC - event for pod-a6515418-d3fe-4053-a92e-27f852b4fa4d: {kubelet node1} Created: Created container write-pod
Nov 13 05:23:59.056: INFO: At 2021-11-13 05:23:45 +0000 UTC - event for pod-a6515418-d3fe-4053-a92e-27f852b4fa4d: {kubelet node1} Started: Started container write-pod
Nov 13 05:23:59.059: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 13 05:23:59.059: INFO: hostexec-node1-s7chx node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:07 +0000 UTC }]
Nov 13 05:23:59.059: INFO: pod-a6515418-d3fe-4053-a92e-27f852b4fa4d node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:41 +0000 UTC }]
Nov 13 05:23:59.059: INFO:
Nov 13 05:23:59.064: INFO: Logging node info for node master1
Nov 13 05:23:59.067: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 182131 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:23:59.068: INFO: Logging kubelet events for node master1 Nov 13 05:23:59.069: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 05:23:59.093: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.093: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 05:23:59.093: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:23:59.093: 
INFO: Init container install-cni ready: true, restart count 0
Nov 13 05:23:59.093: INFO: Container kube-flannel ready: true, restart count 2
Nov 13 05:23:59.093: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container kube-multus ready: true, restart count 1
Nov 13 05:23:59.093: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container coredns ready: true, restart count 2
Nov 13 05:23:59.093: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.093: INFO: Container node-exporter ready: true, restart count 0
Nov 13 05:23:59.093: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container kube-scheduler ready: true, restart count 0
Nov 13 05:23:59.093: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container kube-apiserver ready: true, restart count 0
Nov 13 05:23:59.093: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 05:23:59.093: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.093: INFO: Container docker-registry ready: true, restart count 0
Nov 13 05:23:59.093: INFO: Container nginx ready: true, restart count 0
W1113 05:23:59.107553 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
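[Editor's note] The FAIL earlier in this log ("Failed to find \"1234\", last result: \"1000\n\"") comes from a poll-and-retry check: the test repeatedly execs `stat -c %g /mnt/volume1` in the pod (the timestamps show roughly 2-second intervals) and gives up after a timeout, reporting the last output it saw. A minimal sketch of that retry pattern, with hypothetical helper names (the real implementation is in test/e2e/storage/persistent_volumes-local.go, not this code):

```python
import time


def look_for_string(run_cmd, expected, timeout=10.0, interval=2.0):
    """Poll run_cmd() until its output contains `expected`.

    Hypothetical stand-in for the e2e framework's behavior: call the
    command every `interval` seconds until `expected` appears in its
    output, and raise with the last observed output on timeout -- which
    produces an error message shaped like the one in the log above.
    """
    deadline = time.monotonic() + timeout
    last = ""
    while time.monotonic() < deadline:
        last = run_cmd()
        if expected in last:
            return last
        time.sleep(interval)
    raise TimeoutError(f'Failed to find "{expected}", last result: "{last}"')
```

In the failing run the volume directory kept GID 1000 (see the AlreadyMountedVolume event), so the poll for "1234" never matched and the test failed with the last `stat` output.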
Nov 13 05:23:59.177: INFO: Latency metrics for node master1 Nov 13 05:23:59.177: INFO: Logging node info for node master2 Nov 13 05:23:59.180: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 182124 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:23:59.181: INFO: Logging kubelet events for node master2 Nov 13 05:23:59.183: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 05:23:59.199: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.199: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 05:23:59.199: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.199: INFO: Container nfd-controller ready: true, restart count 0 Nov 13 05:23:59.199: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.199: INFO: Container coredns ready: true, restart count 1 Nov 13 05:23:59.199: INFO: 
node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.199: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.199: INFO: Container node-exporter ready: true, restart count 0
Nov 13 05:23:59.199: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.199: INFO: Container kube-controller-manager ready: true, restart count 2
Nov 13 05:23:59.199: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.199: INFO: Container kube-scheduler ready: true, restart count 2
Nov 13 05:23:59.199: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.199: INFO: Container kube-proxy ready: true, restart count 2
Nov 13 05:23:59.199: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 05:23:59.199: INFO: Init container install-cni ready: true, restart count 0
Nov 13 05:23:59.199: INFO: Container kube-flannel ready: true, restart count 1
Nov 13 05:23:59.199: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.199: INFO: Container kube-multus ready: true, restart count 1
W1113 05:23:59.210978 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 05:23:59.276: INFO: Latency metrics for node master2 Nov 13 05:23:59.276: INFO: Logging node info for node master3 Nov 13 05:23:59.279: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 182102 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:23:59.279: INFO: Logging kubelet events for node master3 Nov 13 05:23:59.282: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 05:23:59.296: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:23:59.296: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:23:59.296: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 05:23:59.296: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:23:59.296: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Container autoscaler ready: true, restart count 1 Nov 13 05:23:59.296: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Container kube-apiserver 
ready: true, restart count 0 Nov 13 05:23:59.296: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 05:23:59.296: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.296: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 05:23:59.296: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:23:59.296: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:23:59.296: INFO: Container node-exporter ready: true, restart count 0 W1113 05:23:59.310307 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:23:59.373: INFO: Latency metrics for node master3 Nov 13 05:23:59.373: INFO: Logging node info for node node1 Nov 13 05:23:59.376: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 182050 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true 
feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.
kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kube-controller-manager Update v1 2021-11-13 05:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-11-13 05:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 
439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 
quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:23:59.376: INFO: Logging kubelet events for node node1 Nov 13 05:23:59.378: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 05:23:59.413: INFO: pod-354df04d-81bc-4986-80ed-ca724a631a0c started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-6a5b7a0b-9d1e-494c-97c8-da2d85248b24 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-a10b4c10-41dd-4794-b78e-1d660723b244 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, 
restart count 0 Nov 13 05:23:59.413: INFO: pod-1eef5635-2792-474c-8edd-e95d182d0b7d started at 2021-11-13 05:21:23 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-5a2d4b30-c1d4-40bc-a2aa-59b7e4d5ebe5 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-00022fe9-d7cf-4be9-bc2a-eaed24c21128 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-3ea59bce-e6a1-4fb1-be0a-7370baa82eaa started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-8f37ef0f-678f-4111-b3f2-4e54a665e006 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-7fc138d7-68a6-4efa-a93c-79922fc3b7bb started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:23:59.413: INFO: pod-3d5261fb-539a-4d29-9486-3df54bdbf1de started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-beb82e7e-ce27-446a-87bd-6e8ed9d9b85f started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: 
pod-67e33396-4bde-4c4c-85d8-aa08ee2aaaf9 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-97d0a0d6-04de-4699-a9d9-c44abedec6ec started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-59d9a59c-366c-4337-b21f-4798f16e198c started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:23:59.413: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:23:59.413: INFO: pod-262cd08c-7727-4434-af6a-1e28d2fa0e16 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-07e30ec0-2756-4ba1-a142-a4cd64c132c6 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-f15180ea-7649-4d3c-9382-582158bdde9d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-c93822fc-470c-4a4d-9431-b7f48f05799c started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: 
pod-8a0c0b8e-9c90-4bf6-a64b-780e586cea23 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 05:23:59.413: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:23:59.413: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:23:59.413: INFO: Container grafana ready: true, restart count 0 Nov 13 05:23:59.413: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:23:59.413: INFO: pod-cc632c31-f462-4558-a98a-4546dffa2bcc started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-129d6331-48b2-4427-b9d9-6cecc1ff842c started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-40a6ee43-a697-40af-a074-0cd29947a8a8 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-f1372c29-5244-4105-8181-3cddfa51dd11 started at 2021-11-13 05:23:04 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:23:59.413: INFO: hostexec-node1-s7chx started at 2021-11-13 05:23:07 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:23:59.413: INFO: pod-115a6904-c055-4883-bf97-b0c305bad80f started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: 
pod-561822b3-4bbf-4170-976a-f6527a130261 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-9043b585-f827-4e2d-8add-a25ce45c8f36 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: pod-f5299b92-be84-4e15-a68c-76730850b24a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.413: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:23:59.413: INFO: Init container install-cni ready: true, restart count 2 Nov 13 05:23:59.413: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:23:59.413: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 05:23:59.413: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:23:59.413: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:23:59.414: INFO: pod-8bfd123f-86b8-424f-a799-9f7beefe593d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.414: INFO: hostexec-node1-7mrvg started at 2021-11-13 05:23:33 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.414: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:23:59.414: INFO: pod-d499fcdf-b3d1-4ca9-a841-5566a5f68a76 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:23:59.414: INFO: pod-e76b572a-7140-414b-ab58-c0131097bc1d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:23:59.414: INFO: 
Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.414: INFO: Container prometheus-operator ready: true, restart count 0
Nov 13 05:23:59.414: INFO: pod-92f181d7-4a60-4286-82ad-cfba09f806dc started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-3874376b-734a-4fab-9d6f-e715ed1bc840 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-677cb1b1-fd1d-4d05-81e1-ccf5a06b5008 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container collectd ready: true, restart count 0
Nov 13 05:23:59.414: INFO: Container collectd-exporter ready: true, restart count 0
Nov 13 05:23:59.414: INFO: Container rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.414: INFO: pod-b357e320-2d00-4951-bf55-22e62ee10654 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-b324c0a2-79dc-4956-8f82-013475f3f69a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container kube-sriovdp ready: true, restart count 0
Nov 13 05:23:59.414: INFO: pod-ac7b74a8-fd8a-4e41-8ccf-39c72291dd29 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-11f15c16-9aa4-4259-bc85-0a8be022f99a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-9c85862e-8ea9-4f74-9b76-749af1c9f54d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-d1037eef-a178-4dbf-9cc3-c14e50408a4b started at 2021-11-13 05:23:57 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container discover ready: false, restart count 0
Nov 13 05:23:59.414: INFO: Container init ready: false, restart count 0
Nov 13 05:23:59.414: INFO: Container install ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-a6515418-d3fe-4053-a92e-27f852b4fa4d started at 2021-11-13 05:23:41 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: true, restart count 0
Nov 13 05:23:59.414: INFO: pod-a3b9ad95-b131-4725-8f68-e3905f1c5326 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-3f5ccdf5-fe1b-4bdf-a15a-f3809def4a6d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-c1f03d55-de00-4602-bf38-9905db9baf2a started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-4bf2c159-346b-48bc-a47e-b27e41d2cfcd started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-e855e0ba-a4f3-49fa-9229-8fbd67c3266d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-bb4ab8d6-779b-4775-9822-1732fae0e10a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-053ce54e-e913-456c-8d95-2c4dff41363e started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-9a87029c-f477-4979-b80f-d98818e3becb started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-fac6db03-99ea-458f-9615-27c22edb0fb9 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-d2775e5c-53e4-4edd-801b-8f36cacba694 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-8ab972c2-540a-4188-afaa-7354d486fb17 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-subpath-test-configmap-fkjr started at 2021-11-13 05:21:00 +0000 UTC (1+2 container statuses recorded)
Nov 13 05:23:59.414: INFO: Init container init-volume-configmap-fkjr ready: true, restart count 0
Nov 13 05:23:59.414: INFO: Container test-container-subpath-configmap-fkjr ready: false, restart count 5
Nov 13 05:23:59.414: INFO: Container test-container-volume-configmap-fkjr ready: true, restart count 0
Nov 13 05:23:59.414: INFO: pod-4de966bd-548b-44c6-9d1d-b85a6c8874d8 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-5c7a13fc-cc8d-4967-b307-f489e6f6aee0 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: pod-4280c57f-89da-453a-9ae0-4bece83c8a8b started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.414: INFO: hostexec-node1-rxrgt started at 2021-11-13 05:21:18 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container agnhost-container ready: true, restart count 0
Nov 13 05:23:59.414: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container nginx-proxy ready: true, restart count 2
Nov 13 05:23:59.414: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container kube-multus ready: true, restart count 1
Nov 13 05:23:59.414: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.414: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.414: INFO: Container node-exporter ready: true, restart count 0
W1113 05:23:59.426948 29 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:23:59.726: INFO: Latency metrics for node node1 Nov 13 05:23:59.726: INFO: Logging node info for node node2 Nov 13 05:23:59.729: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 182041 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true 
feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-2661":"csi-mock-csi-mock-volumes-2661"} flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading"
:{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kube-controller-manager Update v1 2021-11-13 05:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}} {kubelet Update v1 2021-11-13 05:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 
21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 
localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-2661^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-2661^4,DevicePath:,},},Config:nil,},}
Nov 13 05:23:59.730: INFO: Logging kubelet events for node node2
Nov 13 05:23:59.732: INFO: Logging pods the kubelet thinks is on node node2
Nov 13 05:23:59.755: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container discover ready: false, restart count 0
Nov 13 05:23:59.755: INFO: Container init ready: false, restart count 0
Nov 13 05:23:59.755: INFO: Container install ready: false, restart count 0
Nov 13 05:23:59.755: INFO: pod-secrets-757da914-f8ef-4129-baad-c3c4f780d4ec started at 2021-11-13 05:23:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container creates-volume-test ready: false, restart count 0
Nov 13 05:23:59.755: INFO: csi-mockplugin-0 started at 2021-11-13 05:21:53 +0000 UTC (0+3 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container csi-provisioner ready: true, restart count 0
Nov 13 05:23:59.755: INFO: Container driver-registrar ready: true, restart count 0
Nov 13 05:23:59.755: INFO: Container mock ready: true, restart count 0
Nov 13 05:23:59.755: INFO: csi-mockplugin-0 started at 2021-11-13 05:23:06 +0000 UTC (0+3 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container csi-provisioner ready: false, restart count 0
Nov 13 05:23:59.755: INFO: Container driver-registrar ready: false, restart count 0
Nov 13 05:23:59.755: INFO: Container mock ready: false, restart count 0
Nov 13 05:23:59.755: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 05:23:59.755: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container kube-rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.755: INFO: Container node-exporter ready: true, restart count 0
Nov 13 05:23:59.755: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container tas-extender ready: true, restart count 0
Nov 13 05:23:59.755: INFO: pod-8a310057-1062-4302-93cf-6e6f763792ff started at 2021-11-13 05:23:58 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.755: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container nginx-proxy ready: true, restart count 2
Nov 13 05:23:59.755: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container kube-sriovdp ready: true, restart count 0
Nov 13 05:23:59.755: INFO: pvc-volume-tester-4mzt5 started at 2021-11-13 05:22:04 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container volume-tester ready: true, restart count 0
Nov 13 05:23:59.755: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container kubernetes-dashboard ready: true, restart count 1
Nov 13 05:23:59.755: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container kube-proxy ready: true, restart count 1
Nov 13 05:23:59.755: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Init container install-cni ready: true, restart count 2
Nov 13 05:23:59.755: INFO: Container kube-flannel ready: true, restart count 2
Nov 13 05:23:59.755: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container kube-multus ready: true, restart count 1
Nov 13 05:23:59.755: INFO: csi-mockplugin-attacher-0 started at 2021-11-13 05:21:53 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container csi-attacher ready: true, restart count 0
Nov 13 05:23:59.755: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container nfd-worker ready: true, restart count 0
Nov 13 05:23:59.755: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container collectd ready: true, restart count 0
Nov 13 05:23:59.755: INFO: Container collectd-exporter ready: true, restart count 0
Nov 13 05:23:59.755: INFO: Container rbac-proxy ready: true, restart count 0
Nov 13 05:23:59.755: INFO: hostexec-node2-vcss2 started at 2021-11-13 05:23:48 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container agnhost-container ready: true, restart count 0
Nov 13 05:23:59.755: INFO: hostexec-node2-99w8n started at 2021-11-13 05:23:47 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container agnhost-container ready: true, restart count 0
Nov 13 05:23:59.755: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container nodereport ready: true, restart count 0
Nov 13 05:23:59.755: INFO: Container reconcile ready: true, restart count 0
Nov 13 05:23:59.755: INFO: test-hostpath-type-gc6hx started at 2021-11-13 05:23:41 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container host-path-sh-testing ready: true, restart count 0
Nov 13 05:23:59.755: INFO: pod-a04ecd15-585b-48aa-95c3-a320546a4d0b started at 2021-11-13 05:23:57 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container write-pod ready: false, restart count 0
Nov 13 05:23:59.755: INFO: csi-mockplugin-attacher-0 started at 2021-11-13 05:23:06 +0000 UTC (0+1 container statuses recorded)
Nov 13 05:23:59.755: INFO: Container csi-attacher ready: false, restart count 0
W1113 05:23:59.769282 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 05:24:00.452: INFO: Latency metrics for node node2
Nov 13 05:24:00.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2227" for this suite.

• Failure [52.867 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set same fsGroup for two pods simultaneously [Slow] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274

      Nov 13 05:23:58.800: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-a6515418-d3fe-4053-a92e-27f852b4fa4d
      Unexpected error:
          <*errors.errorString | 0xc0049df550>: {
              s: "Failed to find \"1234\", last result: \"1000\n\"",
          }
          Failed to find "1234", last result: "1000
          "
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810
------------------------------
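[Annotation, not part of the captured log.] The failure above says the test expected the group owner of /mnt/volume1 inside pod-a6515418-d3fe-4053-a92e-27f852b4fa4d to become the pod's securityContext.fsGroup (1234), but every poll kept returning 1000 until the timeout. As a rough local sketch (assuming no cluster is available, so a temp directory stands in for the pod's mount and the current user's primary GID stands in for the fsGroup), the probe amounts to reading the directory's group owner and comparing it with the expected GID:

```shell
#!/bin/sh
# Hypothetical stand-in for the e2e fsGroup probe: the real test execs into the
# pod and retries a `stat`-style group-owner check on the mount point until it
# matches securityContext.fsGroup (1234 in this run) or the timeout expires.
dir=$(mktemp -d)
expected_gid=$(id -g)              # stand-in for fsGroup; the real test expects 1234
actual_gid=$(stat -c '%g' "$dir")  # group owner of the "mount point"
if [ "$actual_gid" = "$expected_gid" ]; then
  echo "fsGroup OK: $actual_gid"
else
  echo "fsGroup mismatch: expected $expected_gid, got $actual_gid"
fi
rmdir "$dir"
```

One plausible reading of the stale `1000` result for this dir-link-bindmounted volume type is that the kubelet's fsGroup ownership change did not take effect on the backing directory through the symlink-plus-bind-mount chain, but the log alone does not confirm the root cause.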
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":5,"skipped":309,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:00.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:24:00.529: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:00.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-7720" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 4 containers and 1 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:33.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:23:55.411: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650-backend && mount --bind /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650-backend /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650-backend && ln -s 
/tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650-backend /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650] Namespace:persistent-local-volumes-test-8548 PodName:hostexec-node1-7mrvg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:55.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:23:55.513: INFO: Creating a PV followed by a PVC Nov 13 05:23:55.521: INFO: Waiting for PV local-pvql5qm to bind to PVC pvc-h24j9 Nov 13 05:23:55.521: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h24j9] to have phase Bound Nov 13 05:23:55.523: INFO: PersistentVolumeClaim pvc-h24j9 found but phase is Pending instead of Bound. Nov 13 05:23:57.527: INFO: PersistentVolumeClaim pvc-h24j9 found and phase=Bound (2.005760739s) Nov 13 05:23:57.527: INFO: Waiting up to 3m0s for PersistentVolume local-pvql5qm to have phase Bound Nov 13 05:23:57.529: INFO: PersistentVolume local-pvql5qm found and phase=Bound (2.532137ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:24:01.557: INFO: pod "pod-d1037eef-a178-4dbf-9cc3-c14e50408a4b" created on Node "node1" STEP: Writing in pod1 Nov 13 05:24:01.557: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8548 PodName:pod-d1037eef-a178-4dbf-9cc3-c14e50408a4b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:01.557: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:01.679: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:24:01.679: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8548 PodName:pod-d1037eef-a178-4dbf-9cc3-c14e50408a4b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:01.679: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:01.767: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-d1037eef-a178-4dbf-9cc3-c14e50408a4b in namespace persistent-local-volumes-test-8548 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:24:01.773: INFO: Deleting PersistentVolumeClaim "pvc-h24j9" Nov 13 05:24:01.778: INFO: Deleting PersistentVolume "local-pvql5qm" STEP: Removing the test directory Nov 13 05:24:01.783: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650 && umount /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650-backend && rm -r /tmp/local-volume-test-d508006f-9ddd-4ffd-8c0e-373cfd07d650-backend] Namespace:persistent-local-volumes-test-8548 PodName:hostexec-node1-7mrvg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:01.783: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:01.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8548" for this suite. • [SLOW TEST:28.542 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:18.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342" Nov 13 05:23:02.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342" "/tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342"] Namespace:persistent-local-volumes-test-6279 PodName:hostexec-node1-rxrgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:02.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:23:02.942: INFO: Creating a PV followed by a PVC Nov 13 05:23:02.953: INFO: Waiting for PV local-pvqkjbh to bind to PVC pvc-9q65x Nov 13 05:23:02.953: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9q65x] to have phase Bound Nov 13 05:23:02.955: INFO: PersistentVolumeClaim pvc-9q65x found but phase is Pending instead of Bound. 
Nov 13 05:23:04.960: INFO: PersistentVolumeClaim pvc-9q65x found and phase=Bound (2.007657057s) Nov 13 05:23:04.960: INFO: Waiting up to 3m0s for PersistentVolume local-pvqkjbh to have phase Bound Nov 13 05:23:04.963: INFO: PersistentVolume local-pvqkjbh found and phase=Bound (2.174692ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:23:56.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6279 exec pod-f1372c29-5244-4105-8181-3cddfa51dd11 --namespace=persistent-local-volumes-test-6279 -- stat -c %g /mnt/volume1' Nov 13 05:23:57.262: INFO: stderr: "" Nov 13 05:23:57.262: INFO: stdout: "0\n" Nov 13 05:23:59.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6279 exec pod-f1372c29-5244-4105-8181-3cddfa51dd11 --namespace=persistent-local-volumes-test-6279 -- stat -c %g /mnt/volume1' Nov 13 05:23:59.944: INFO: stderr: "" Nov 13 05:23:59.944: INFO: stdout: "0\n" Nov 13 05:24:01.945: FAIL: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-f1372c29-5244-4105-8181-3cddfa51dd11 Unexpected error: <*errors.errorString | 0xc003fb1960>: { s: "Failed to find \"1234\", last result: \"0\n\"", } Failed to find "1234", last result: "0 " occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.createPodWithFsGroupTest(0xc00451fd40, 0xc003c76720, 0x4d2, 0x4d2, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810 +0x317 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.6.2() 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:269 +0x8d k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00239c480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00239c480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00239c480, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:24:01.947: INFO: Deleting PersistentVolumeClaim "pvc-9q65x" Nov 13 05:24:01.952: INFO: Deleting PersistentVolume "local-pvqkjbh" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342" Nov 13 05:24:01.956: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342"] Namespace:persistent-local-volumes-test-6279 PodName:hostexec-node1-rxrgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:01.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:24:02.092: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c05f6aaf-b842-4bab-8172-e46b7d867342] Namespace:persistent-local-volumes-test-6279 PodName:hostexec-node1-rxrgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:02.092: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-6279". STEP: Found 11 events. Nov 13 05:24:02.207: INFO: At 2021-11-13 05:21:18 +0000 UTC - event for hostexec-node1-rxrgt: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6279/hostexec-node1-rxrgt to node1 Nov 13 05:24:02.207: INFO: At 2021-11-13 05:22:09 +0000 UTC - event for hostexec-node1-rxrgt: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 05:24:02.207: INFO: At 2021-11-13 05:22:12 +0000 UTC - event for hostexec-node1-rxrgt: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.242075251s Nov 13 05:24:02.207: INFO: At 2021-11-13 05:22:12 +0000 UTC - event for hostexec-node1-rxrgt: {kubelet node1} Created: Created container agnhost-container Nov 13 05:24:02.207: INFO: At 2021-11-13 05:22:12 +0000 UTC - event for hostexec-node1-rxrgt: {kubelet node1} Started: Started container agnhost-container Nov 13 05:24:02.207: INFO: At 2021-11-13 05:23:04 +0000 UTC - event for pod-f1372c29-5244-4105-8181-3cddfa51dd11: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6279/pod-f1372c29-5244-4105-8181-3cddfa51dd11 to node1 Nov 13 05:24:02.207: INFO: At 2021-11-13 05:23:11 +0000 UTC - event for pod-f1372c29-5244-4105-8181-3cddfa51dd11: {kubelet node1} AlreadyMountedVolume: The requested fsGroup is 1234, but the volume local-pvqkjbh has GID 0. The volume may not be shareable. 
Nov 13 05:24:02.207: INFO: At 2021-11-13 05:23:27 +0000 UTC - event for pod-f1372c29-5244-4105-8181-3cddfa51dd11: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" Nov 13 05:24:02.207: INFO: At 2021-11-13 05:23:28 +0000 UTC - event for pod-f1372c29-5244-4105-8181-3cddfa51dd11: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 287.681214ms Nov 13 05:24:02.207: INFO: At 2021-11-13 05:23:29 +0000 UTC - event for pod-f1372c29-5244-4105-8181-3cddfa51dd11: {kubelet node1} Created: Created container write-pod Nov 13 05:24:02.207: INFO: At 2021-11-13 05:23:33 +0000 UTC - event for pod-f1372c29-5244-4105-8181-3cddfa51dd11: {kubelet node1} Started: Started container write-pod Nov 13 05:24:02.210: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 05:24:02.210: INFO: hostexec-node1-rxrgt node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:22:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:22:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:21:18 +0000 UTC }] Nov 13 05:24:02.210: INFO: pod-f1372c29-5244-4105-8181-3cddfa51dd11 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 05:23:04 +0000 UTC }] Nov 13 05:24:02.210: INFO: Nov 13 05:24:02.215: INFO: Logging node info for node master1 Nov 13 05:24:02.217: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 182131 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:24:02.218: INFO: Logging kubelet events for node master1 Nov 13 05:24:02.221: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 05:24:02.240: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.240: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 05:24:02.240: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:24:02.240: 
INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:24:02.241: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:24:02.241: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.241: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:24:02.241: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.241: INFO: Container coredns ready: true, restart count 2 Nov 13 05:24:02.241: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.241: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.241: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:24:02.241: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.241: INFO: Container kube-scheduler ready: true, restart count 0 Nov 13 05:24:02.241: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.241: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 05:24:02.241: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.241: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:24:02.241: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.241: INFO: Container docker-registry ready: true, restart count 0 Nov 13 05:24:02.241: INFO: Container nginx ready: true, restart count 0 W1113 05:24:02.254874 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 13 05:24:02.321: INFO: Latency metrics for node master1 Nov 13 05:24:02.321: INFO: Logging node info for node master2 Nov 13 05:24:02.323: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 182124 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 
+0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:58 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:24:02.323: INFO: Logging kubelet events for node master2 Nov 13 05:24:02.325: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 05:24:02.332: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:24:02.332: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container coredns ready: true, restart count 1 Nov 13 05:24:02.332: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.332: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.332: INFO: Container node-exporter 
ready: true, restart count 0 Nov 13 05:24:02.332: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 05:24:02.332: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 05:24:02.332: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:24:02.332: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:24:02.332: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 05:24:02.332: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 05:24:02.332: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.332: INFO: Container nfd-controller ready: true, restart count 0 W1113 05:24:02.343812 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
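Each Node Info dump above includes a Conditions list: the e2e suite treats a node as schedulable when its Ready condition is True and the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) are False. A minimal sketch of that check, using the master2 conditions logged above; the helper name and plain-dict shape are illustrative, not the e2e framework's actual code:

```python
# Sketch of the readiness check implied by the Conditions dumps above.
# is_node_ready and the dict layout are hypothetical, for illustration only.
def is_node_ready(conditions):
    """True if Ready is True and no pressure condition is True."""
    status = {c["type"]: c["status"] for c in conditions}
    pressures = ("MemoryPressure", "DiskPressure", "PIDPressure")
    return (status.get("Ready") == "True"
            and all(status.get(p, "False") == "False" for p in pressures))

# Condition values as logged for node master2 at 05:23:58.
master2 = [
    {"type": "NetworkUnavailable", "status": "False"},
    {"type": "MemoryPressure", "status": "False"},
    {"type": "DiskPressure", "status": "False"},
    {"type": "PIDPressure", "status": "False"},
    {"type": "Ready", "status": "True"},
]
print(is_node_ready(master2))  # True
```

The same fields can be pulled from a live cluster with `kubectl get node master2 -o jsonpath='{.status.conditions}'`, which is roughly what the test framework is serializing here.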
Nov 13 05:24:02.417: INFO: Latency metrics for node master2 Nov 13 05:24:02.417: INFO: Logging node info for node master3 Nov 13 05:24:02.420: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 182102 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:56 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:24:02.421: INFO: Logging kubelet events for node master3 Nov 13 05:24:02.423: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 05:24:02.431: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.431: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 05:24:02.431: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.431: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 05:24:02.431: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.431: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 05:24:02.431: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.431: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.431: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:24:02.431: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.431: INFO: 
Container kube-proxy ready: true, restart count 1 Nov 13 05:24:02.431: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:24:02.431: INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:24:02.431: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 05:24:02.431: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.431: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:24:02.431: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.431: INFO: Container autoscaler ready: true, restart count 1 W1113 05:24:02.445941 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:24:02.518: INFO: Latency metrics for node master3 Nov 13 05:24:02.518: INFO: Logging node info for node node1 Nov 13 05:24:02.521: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 182050 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true 
feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.
kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kube-controller-manager Update v1 2021-11-13 05:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-11-13 05:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 
439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:54 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 
quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:24:02.521: INFO: Logging kubelet events for node node1 Nov 13 05:24:02.523: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 05:24:02.553: INFO: pod-3874376b-734a-4fab-9d6f-e715ed1bc840 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-677cb1b1-fd1d-4d05-81e1-ccf5a06b5008 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-d499fcdf-b3d1-4ca9-a841-5566a5f68a76 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, 
restart count 0 Nov 13 05:24:02.553: INFO: pod-e76b572a-7140-414b-ab58-c0131097bc1d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.553: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:24:02.553: INFO: pod-92f181d7-4a60-4286-82ad-cfba09f806dc started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-b357e320-2d00-4951-bf55-22e62ee10654 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 05:24:02.553: INFO: Container collectd ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.553: INFO: pod-11f15c16-9aa4-4259-bc85-0a8be022f99a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-9c85862e-8ea9-4f74-9b76-749af1c9f54d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-b324c0a2-79dc-4956-8f82-013475f3f69a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 
13 05:24:02.553: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:24:02.553: INFO: pod-ac7b74a8-fd8a-4e41-8ccf-39c72291dd29 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-a3b9ad95-b131-4725-8f68-e3905f1c5326 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-3f5ccdf5-fe1b-4bdf-a15a-f3809def4a6d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-d1037eef-a178-4dbf-9cc3-c14e50408a4b started at 2021-11-13 05:23:57 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:24:02.553: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 05:24:02.553: INFO: Container discover ready: false, restart count 0 Nov 13 05:24:02.553: INFO: Container init ready: false, restart count 0 Nov 13 05:24:02.553: INFO: Container install ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-a6515418-d3fe-4053-a92e-27f852b4fa4d started at 2021-11-13 05:23:41 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:24:02.553: INFO: pod-e855e0ba-a4f3-49fa-9229-8fbd67c3266d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-bb4ab8d6-779b-4775-9822-1732fae0e10a started at 2021-11-13 05:21:14 
+0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-c1f03d55-de00-4602-bf38-9905db9baf2a started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-4bf2c159-346b-48bc-a47e-b27e41d2cfcd started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-053ce54e-e913-456c-8d95-2c4dff41363e started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-5c7a13fc-cc8d-4967-b307-f489e6f6aee0 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-4280c57f-89da-453a-9ae0-4bece83c8a8b started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-9a87029c-f477-4979-b80f-d98818e3becb started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-fac6db03-99ea-458f-9615-27c22edb0fb9 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-d2775e5c-53e4-4edd-801b-8f36cacba694 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-8ab972c2-540a-4188-afaa-7354d486fb17 started at 2021-11-13 05:21:14 +0000 UTC (0+1 
container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-subpath-test-configmap-fkjr started at 2021-11-13 05:21:00 +0000 UTC (1+2 container statuses recorded) Nov 13 05:24:02.553: INFO: Init container init-volume-configmap-fkjr ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container test-container-subpath-configmap-fkjr ready: false, restart count 5 Nov 13 05:24:02.553: INFO: Container test-container-volume-configmap-fkjr ready: true, restart count 0 Nov 13 05:24:02.553: INFO: pod-4de966bd-548b-44c6-9d1d-b85a6c8874d8 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.553: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:24:02.553: INFO: hostexec-node1-rxrgt started at 2021-11-13 05:21:18 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:24:02.553: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:24:02.553: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:24:02.553: INFO: pod-354df04d-81bc-4986-80ed-ca724a631a0c started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-6a5b7a0b-9d1e-494c-97c8-da2d85248b24 started at 2021-11-13 05:21:13 +0000 UTC (0+1 
container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-3ea59bce-e6a1-4fb1-be0a-7370baa82eaa started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-8f37ef0f-678f-4111-b3f2-4e54a665e006 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-a10b4c10-41dd-4794-b78e-1d660723b244 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-1eef5635-2792-474c-8edd-e95d182d0b7d started at 2021-11-13 05:21:23 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-5a2d4b30-c1d4-40bc-a2aa-59b7e4d5ebe5 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-00022fe9-d7cf-4be9-bc2a-eaed24c21128 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-3d5261fb-539a-4d29-9486-3df54bdbf1de started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-7fc138d7-68a6-4efa-a93c-79922fc3b7bb started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 
05:24:02.553: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:24:02.553: INFO: pod-97d0a0d6-04de-4699-a9d9-c44abedec6ec started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-beb82e7e-ce27-446a-87bd-6e8ed9d9b85f started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-67e33396-4bde-4c4c-85d8-aa08ee2aaaf9 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-262cd08c-7727-4434-af6a-1e28d2fa0e16 started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-59d9a59c-366c-4337-b21f-4798f16e198c started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:24:02.553: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:24:02.553: INFO: pod-129d6331-48b2-4427-b9d9-6cecc1ff842c started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-40a6ee43-a697-40af-a074-0cd29947a8a8 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container 
write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-07e30ec0-2756-4ba1-a142-a4cd64c132c6 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-f15180ea-7649-4d3c-9382-582158bdde9d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-c93822fc-470c-4a4d-9431-b7f48f05799c started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-8a0c0b8e-9c90-4bf6-a64b-780e586cea23 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 05:24:02.553: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container grafana ready: true, restart count 0 Nov 13 05:24:02.553: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:24:02.553: INFO: pod-cc632c31-f462-4558-a98a-4546dffa2bcc started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-9043b585-f827-4e2d-8add-a25ce45c8f36 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.553: INFO: pod-f5299b92-be84-4e15-a68c-76730850b24a started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: 
false, restart count 0 Nov 13 05:24:02.553: INFO: pod-f1372c29-5244-4105-8181-3cddfa51dd11 started at 2021-11-13 05:23:04 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.553: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:24:02.554: INFO: hostexec-node1-s7chx started at 2021-11-13 05:23:07 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.554: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:24:02.554: INFO: pod-115a6904-c055-4883-bf97-b0c305bad80f started at 2021-11-13 05:21:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.554: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.554: INFO: pod-561822b3-4bbf-4170-976a-f6527a130261 started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.554: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.554: INFO: pod-8bfd123f-86b8-424f-a799-9f7beefe593d started at 2021-11-13 05:21:14 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.554: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.554: INFO: hostexec-node1-7mrvg started at 2021-11-13 05:23:33 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.554: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:24:02.554: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:24:02.554: INFO: Init container install-cni ready: true, restart count 2 Nov 13 05:24:02.554: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:24:02.554: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.554: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:24:02.554: INFO: Container reconcile ready: true, restart count 0 W1113 05:24:02.568011 32 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:24:02.849: INFO: Latency metrics for node node1 Nov 13 05:24:02.849: INFO: Logging node info for node node2 Nov 13 05:24:02.852: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 182041 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 
feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-2661":"csi-mock-csi-mock-volumes-2661"} flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[{kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rd
t.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kube-controller-manager Update v1 2021-11-13 05:22:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}} {kubelet Update v1 2021-11-13 05:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 
21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:23:53 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 
localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-2661^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-2661^4,DevicePath:,},},Config:nil,},} Nov 13 05:24:02.853: INFO: Logging kubelet events for node node2 Nov 13 05:24:02.855: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 05:24:02.872: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.872: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:24:02.872: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:24:02.872: INFO: test-hostpath-type-gc6hx started at 2021-11-13 05:23:41 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 13 05:24:02.872: INFO: pod-a04ecd15-585b-48aa-95c3-a320546a4d0b started at 2021-11-13 05:23:57 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:24:02.872: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 05:24:02.872: INFO: Container discover ready: false, restart count 0 Nov 13 05:24:02.872: INFO: Container init ready: false, restart count 0 Nov 13 05:24:02.872: INFO: Container install ready: false, restart count 0 Nov 13 05:24:02.872: INFO: pod-secrets-757da914-f8ef-4129-baad-c3c4f780d4ec started at 2021-11-13 05:23:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container creates-volume-test ready: false, restart count 0 Nov 13 05:24:02.872: INFO: csi-mockplugin-0 started at 2021-11-13 05:21:53 +0000 UTC (0+3 container statuses recorded) Nov 13 05:24:02.872: INFO: Container 
csi-provisioner ready: true, restart count 0 Nov 13 05:24:02.872: INFO: Container driver-registrar ready: true, restart count 0 Nov 13 05:24:02.872: INFO: Container mock ready: true, restart count 0 Nov 13 05:24:02.872: INFO: test-hostpath-type-hhcg6 started at 2021-11-13 05:24:00 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container host-path-testing ready: false, restart count 0 Nov 13 05:24:02.872: INFO: hostexec-node2-kg7k4 started at (0+0 container statuses recorded) Nov 13 05:24:02.872: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:24:02.872: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:24:02.872: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.872: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:24:02.872: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:24:02.872: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:24:02.872: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:24:02.872: INFO: pvc-volume-tester-4mzt5 started at 2021-11-13 05:22:04 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container volume-tester ready: true, restart count 0 Nov 13 05:24:02.872: INFO: pod-8a310057-1062-4302-93cf-6e6f763792ff started at 2021-11-13 05:23:58 +0000 UTC 
(0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:24:02.872: INFO: pod-3a8e8df9-162a-4708-9bc3-daa7ddb80063 started at (0+0 container statuses recorded) Nov 13 05:24:02.872: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:24:02.872: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Init container install-cni ready: true, restart count 2 Nov 13 05:24:02.872: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:24:02.872: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:24:02.872: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:24:02.872: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:24:02.872: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 05:24:02.872: INFO: Container collectd ready: true, restart count 0 Nov 13 05:24:02.872: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:24:02.872: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:24:02.872: INFO: hostexec-node2-vcss2 started at 2021-11-13 05:23:48 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:24:02.872: INFO: csi-mockplugin-attacher-0 started at 2021-11-13 05:21:53 
+0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container csi-attacher ready: true, restart count 0 Nov 13 05:24:02.872: INFO: hostexec-node2-99w8n started at 2021-11-13 05:23:47 +0000 UTC (0+1 container statuses recorded) Nov 13 05:24:02.872: INFO: Container agnhost-container ready: true, restart count 0 W1113 05:24:02.886747 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:24:03.169: INFO: Latency metrics for node node2 Nov 13 05:24:03.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6279" for this suite. • Failure [164.491 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Nov 13 05:24:01.946: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-f1372c29-5244-4105-8181-3cddfa51dd11 Unexpected error: <*errors.errorString | 0xc003fb1960>: { s: "Failed to find \"1234\", last result: \"0\n\"", } Failed to find "1234", last result: "0 " occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod 
[Slow]","total":-1,"completed":2,"skipped":185,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:48.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:23:50.173: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-98d2e2db-7472-4408-8ae2-f9a9c42a6363] Namespace:persistent-local-volumes-test-8809 PodName:hostexec-node2-vcss2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:50.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:23:50.269: INFO: Creating a PV followed by a PVC Nov 13 05:23:50.276: INFO: Waiting for PV local-pvqk9w6 to bind to PVC pvc-gfwmh Nov 13 05:23:50.276: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gfwmh] to have phase Bound Nov 13 05:23:50.278: INFO: PersistentVolumeClaim pvc-gfwmh found but phase is Pending instead of Bound. Nov 13 05:23:52.284: INFO: PersistentVolumeClaim pvc-gfwmh found but phase is Pending instead of Bound. 
Nov 13 05:23:54.289: INFO: PersistentVolumeClaim pvc-gfwmh found but phase is Pending instead of Bound. Nov 13 05:23:56.292: INFO: PersistentVolumeClaim pvc-gfwmh found but phase is Pending instead of Bound. Nov 13 05:23:58.298: INFO: PersistentVolumeClaim pvc-gfwmh found and phase=Bound (8.022289618s) Nov 13 05:23:58.298: INFO: Waiting up to 3m0s for PersistentVolume local-pvqk9w6 to have phase Bound Nov 13 05:23:58.300: INFO: PersistentVolume local-pvqk9w6 found and phase=Bound (1.946016ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:24:06.328: INFO: pod "pod-8a310057-1062-4302-93cf-6e6f763792ff" created on Node "node2" STEP: Writing in pod1 Nov 13 05:24:06.328: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8809 PodName:pod-8a310057-1062-4302-93cf-6e6f763792ff ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:06.328: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:06.412: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:24:06.412: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8809 PodName:pod-8a310057-1062-4302-93cf-6e6f763792ff ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:06.412: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:06.499: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: 
"test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:24:06.499: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-98d2e2db-7472-4408-8ae2-f9a9c42a6363 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8809 PodName:pod-8a310057-1062-4302-93cf-6e6f763792ff ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:06.499: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:06.576: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-98d2e2db-7472-4408-8ae2-f9a9c42a6363 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-8a310057-1062-4302-93cf-6e6f763792ff in namespace persistent-local-volumes-test-8809 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:24:06.582: INFO: Deleting PersistentVolumeClaim "pvc-gfwmh" Nov 13 05:24:06.588: INFO: Deleting PersistentVolume "local-pvqk9w6" STEP: Removing the test directory Nov 13 05:24:06.593: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-98d2e2db-7472-4408-8ae2-f9a9c42a6363] Namespace:persistent-local-volumes-test-8809 PodName:hostexec-node2-vcss2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:06.593: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:06.683: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "persistent-local-volumes-test-8809" for this suite. • [SLOW TEST:18.567 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":140,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:06.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:24:06.725: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:06.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-7596" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:00.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:24:00.649: INFO: The status of Pod test-hostpath-type-hhcg6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:02.652: INFO: The status of Pod test-hostpath-type-hhcg6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:04.655: INFO: The status of Pod test-hostpath-type-hhcg6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:06.654: INFO: The status of Pod test-hostpath-type-hhcg6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:08.658: INFO: The status of Pod test-hostpath-type-hhcg6 is Running (Ready = true) 
STEP: running on node node2 STEP: Create a block device for further testing Nov 13 05:24:08.660: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-8616 PodName:test-hostpath-type-hhcg6 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:08.660: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:10.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-8616" for this suite. 
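The test above passes because kubelet's HostPathType validation rejects a block device (created with `mknod … b 89 1`) when the pod requests `HostPathSocket`. That validation boils down to a file-type check on the host path. A minimal sketch of the same distinction in plain sh — the paths are illustrative, and since `mknod` for a block device needs root, the sketch uses file types that can be created unprivileged:

```shell
# Sketch of the file-type test behind HostPathType (illustrative paths).
# Each HostPathType maps to a stat(2) mode check, e.g. HostPathFile -> -f,
# HostPathDirectory -> -d, HostPathBlockDev -> -b, HostPathSocket -> -S.
dir=$(mktemp -d)
touch "$dir/afile"      # would satisfy HostPathFile
mkdir "$dir/adir"       # would satisfy HostPathDirectory
mkfifo "$dir/apipe"     # a FIFO, stands in for a special file here

[ -f "$dir/afile" ] && echo "afile: regular file"
[ -d "$dir/adir" ]  && echo "adir: directory"
[ -p "$dir/apipe" ] && echo "apipe: FIFO"
# None of these is a socket, so a pod asking for HostPathSocket on any of
# them would be rejected -- the same mismatch the e2e test provokes with
# a block device.
[ -S "$dir/afile" ] || echo "afile: not a socket, mount rejected"
rm -rf "$dir"
```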
• [SLOW TEST:10.196 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket","total":-1,"completed":6,"skipped":362,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:02.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:24:10.062: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-41d12af8-9663-4de3-aeaa-3b088aa3255f-backend && ln -s /tmp/local-volume-test-41d12af8-9663-4de3-aeaa-3b088aa3255f-backend /tmp/local-volume-test-41d12af8-9663-4de3-aeaa-3b088aa3255f] 
Namespace:persistent-local-volumes-test-4193 PodName:hostexec-node2-kg7k4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:10.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:24:10.164: INFO: Creating a PV followed by a PVC Nov 13 05:24:10.172: INFO: Waiting for PV local-pvrm275 to bind to PVC pvc-2lfq5 Nov 13 05:24:10.172: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2lfq5] to have phase Bound Nov 13 05:24:10.174: INFO: PersistentVolumeClaim pvc-2lfq5 found but phase is Pending instead of Bound. Nov 13 05:24:12.178: INFO: PersistentVolumeClaim pvc-2lfq5 found and phase=Bound (2.006265651s) Nov 13 05:24:12.178: INFO: Waiting up to 3m0s for PersistentVolume local-pvrm275 to have phase Bound Nov 13 05:24:12.180: INFO: PersistentVolume local-pvrm275 found and phase=Bound (1.712139ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:24:12.183: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:24:12.185: INFO: Deleting PersistentVolumeClaim "pvc-2lfq5" Nov 13 05:24:12.188: INFO: Deleting PersistentVolume "local-pvrm275" STEP: Removing the test directory Nov 13 05:24:12.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-41d12af8-9663-4de3-aeaa-3b088aa3255f && rm -r /tmp/local-volume-test-41d12af8-9663-4de3-aeaa-3b088aa3255f-backend] 
Namespace:persistent-local-volumes-test-4193 PodName:hostexec-node2-kg7k4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:12.192: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:12.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4193" for this suite. S [SKIPPING] [10.314 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:47.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:23:49.300: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5b23caad-2737-43d3-9c35-72f2a2534f81] Namespace:persistent-local-volumes-test-1959 PodName:hostexec-node2-99w8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:23:49.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:23:49.387: INFO: Creating a PV followed by a PVC Nov 13 05:23:49.394: INFO: Waiting for PV local-pvdt774 to bind to PVC pvc-g58sj Nov 13 05:23:49.394: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-g58sj] to have phase Bound Nov 13 05:23:49.396: INFO: PersistentVolumeClaim pvc-g58sj found but phase is Pending instead of Bound. Nov 13 05:23:51.401: INFO: PersistentVolumeClaim pvc-g58sj found but phase is Pending instead of Bound. Nov 13 05:23:53.404: INFO: PersistentVolumeClaim pvc-g58sj found but phase is Pending instead of Bound. Nov 13 05:23:55.409: INFO: PersistentVolumeClaim pvc-g58sj found but phase is Pending instead of Bound. 
Nov 13 05:23:57.413: INFO: PersistentVolumeClaim pvc-g58sj found and phase=Bound (8.018975762s) Nov 13 05:23:57.413: INFO: Waiting up to 3m0s for PersistentVolume local-pvdt774 to have phase Bound Nov 13 05:23:57.416: INFO: PersistentVolume local-pvdt774 found and phase=Bound (2.752462ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:24:01.443: INFO: pod "pod-a04ecd15-585b-48aa-95c3-a320546a4d0b" created on Node "node2" STEP: Writing in pod1 Nov 13 05:24:01.443: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1959 PodName:pod-a04ecd15-585b-48aa-95c3-a320546a4d0b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:01.443: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:02.130: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:24:02.130: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1959 PodName:pod-a04ecd15-585b-48aa-95c3-a320546a4d0b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:02.130: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:02.210: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-a04ecd15-585b-48aa-95c3-a320546a4d0b in namespace persistent-local-volumes-test-1959 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:24:14.235: INFO: pod "pod-3a8e8df9-162a-4708-9bc3-daa7ddb80063" created on Node "node2" STEP: Reading in pod2 Nov 13 05:24:14.235: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1959 PodName:pod-3a8e8df9-162a-4708-9bc3-daa7ddb80063 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:24:14.235: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:14.315: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-3a8e8df9-162a-4708-9bc3-daa7ddb80063 in namespace persistent-local-volumes-test-1959 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:24:14.320: INFO: Deleting PersistentVolumeClaim "pvc-g58sj" Nov 13 05:24:14.324: INFO: Deleting PersistentVolume "local-pvdt774" STEP: Removing the test directory Nov 13 05:24:14.328: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5b23caad-2737-43d3-9c35-72f2a2534f81] Namespace:persistent-local-volumes-test-1959 PodName:hostexec-node2-99w8n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:14.328: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:14.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1959" for this suite. 
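The write/read round-trip this test performs reduces to the two shell commands visible in the `ExecWithOptions` entries above: pod1 writes a file into the mounted local volume, pod2 reads it back. A minimal local sketch of the same sequence, using a temporary directory as a stand-in for the pod mount path `/mnt/volume1` (the temp directory is the only assumption; the commands and file content mirror the log):

```shell
#!/bin/sh
set -eu

# Stand-in for the local PV mount point (/mnt/volume1 inside pod1/pod2).
vol="$(mktemp -d)"

# pod1's command, as logged by podRWCmdExec:
mkdir -p "$vol"
echo test-file-content > "$vol/test-file"

# pod2's command; the test passes when the content round-trips intact.
cat "$vol/test-file"

rm -rf "$vol"
```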
• [SLOW TEST:27.189 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":360,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:23:06.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7562 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:23:06.111: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-attacher Nov 13 05:23:06.114: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7562 Nov 13 05:23:06.114: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-7562 Nov 13 05:23:06.117: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7562 Nov 13 05:23:06.119: INFO: creating *v1.Role: csi-mock-volumes-7562-7495/external-attacher-cfg-csi-mock-volumes-7562 Nov 13 05:23:06.122: INFO: creating *v1.RoleBinding: csi-mock-volumes-7562-7495/csi-attacher-role-cfg Nov 13 05:23:06.125: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-provisioner Nov 13 05:23:06.127: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7562 Nov 13 05:23:06.127: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7562 Nov 13 05:23:06.131: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7562 Nov 13 05:23:06.134: INFO: creating *v1.Role: csi-mock-volumes-7562-7495/external-provisioner-cfg-csi-mock-volumes-7562 Nov 13 05:23:06.137: INFO: creating *v1.RoleBinding: csi-mock-volumes-7562-7495/csi-provisioner-role-cfg Nov 13 05:23:06.140: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-resizer Nov 13 05:23:06.143: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7562 Nov 13 05:23:06.143: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7562 Nov 13 05:23:06.146: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7562 Nov 13 05:23:06.148: INFO: creating *v1.Role: csi-mock-volumes-7562-7495/external-resizer-cfg-csi-mock-volumes-7562 Nov 13 05:23:06.151: INFO: creating *v1.RoleBinding: csi-mock-volumes-7562-7495/csi-resizer-role-cfg Nov 13 05:23:06.153: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-snapshotter Nov 13 05:23:06.156: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7562 Nov 13 05:23:06.156: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7562 Nov 13 05:23:06.159: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7562 
Nov 13 05:23:06.162: INFO: creating *v1.Role: csi-mock-volumes-7562-7495/external-snapshotter-leaderelection-csi-mock-volumes-7562 Nov 13 05:23:06.165: INFO: creating *v1.RoleBinding: csi-mock-volumes-7562-7495/external-snapshotter-leaderelection Nov 13 05:23:06.167: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-mock Nov 13 05:23:06.170: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7562 Nov 13 05:23:06.172: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7562 Nov 13 05:23:06.175: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7562 Nov 13 05:23:06.178: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7562 Nov 13 05:23:06.180: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7562 Nov 13 05:23:06.183: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7562 Nov 13 05:23:06.186: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7562 Nov 13 05:23:06.189: INFO: creating *v1.StatefulSet: csi-mock-volumes-7562-7495/csi-mockplugin Nov 13 05:23:06.193: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7562 Nov 13 05:23:06.196: INFO: creating *v1.StatefulSet: csi-mock-volumes-7562-7495/csi-mockplugin-attacher Nov 13 05:23:06.199: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7562" Nov 13 05:23:06.201: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7562 to register on node node2 STEP: Creating pod Nov 13 05:23:20.720: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:23:30.745: INFO: Deleting pod "pvc-volume-tester-sw65c" in namespace "csi-mock-volumes-7562" Nov 13 05:23:30.751: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sw65c" to be fully deleted STEP: Deleting pod 
pvc-volume-tester-sw65c
Nov 13 05:23:42.759: INFO: Deleting pod "pvc-volume-tester-sw65c" in namespace "csi-mock-volumes-7562"
STEP: Deleting claim pvc-cdqnf
Nov 13 05:23:42.768: INFO: Waiting up to 2m0s for PersistentVolume pvc-0360d896-c3ab-4fe6-80cc-0e01b38031f5 to get deleted
Nov 13 05:23:42.770: INFO: PersistentVolume pvc-0360d896-c3ab-4fe6-80cc-0e01b38031f5 found and phase=Bound (1.996287ms)
Nov 13 05:23:44.775: INFO: PersistentVolume pvc-0360d896-c3ab-4fe6-80cc-0e01b38031f5 was removed
STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7562
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7562
STEP: Waiting for namespaces [csi-mock-volumes-7562] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:23:50.788: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-attacher
Nov 13 05:23:50.792: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7562
Nov 13 05:23:50.796: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7562
Nov 13 05:23:50.800: INFO: deleting *v1.Role: csi-mock-volumes-7562-7495/external-attacher-cfg-csi-mock-volumes-7562
Nov 13 05:23:50.803: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7562-7495/csi-attacher-role-cfg
Nov 13 05:23:50.807: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-provisioner
Nov 13 05:23:50.810: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7562
Nov 13 05:23:50.814: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7562
Nov 13 05:23:50.817: INFO: deleting *v1.Role: csi-mock-volumes-7562-7495/external-provisioner-cfg-csi-mock-volumes-7562
Nov 13 05:23:50.821: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7562-7495/csi-provisioner-role-cfg
Nov 13 05:23:50.824: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-resizer
Nov 13 05:23:50.827: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7562
Nov 13 05:23:50.831: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7562
Nov 13 05:23:50.834: INFO: deleting *v1.Role: csi-mock-volumes-7562-7495/external-resizer-cfg-csi-mock-volumes-7562
Nov 13 05:23:50.837: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7562-7495/csi-resizer-role-cfg
Nov 13 05:23:50.842: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-snapshotter
Nov 13 05:23:50.845: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7562
Nov 13 05:23:50.849: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7562
Nov 13 05:23:50.852: INFO: deleting *v1.Role: csi-mock-volumes-7562-7495/external-snapshotter-leaderelection-csi-mock-volumes-7562
Nov 13 05:23:50.855: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7562-7495/external-snapshotter-leaderelection
Nov 13 05:23:50.859: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7562-7495/csi-mock
Nov 13 05:23:50.863: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7562
Nov 13 05:23:50.867: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7562
Nov 13 05:23:50.870: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7562
Nov 13 05:23:50.873: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7562
Nov 13 05:23:50.876: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7562
Nov 13 05:23:50.880: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7562
Nov 13 05:23:50.883: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7562
Nov 13 05:23:50.886: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7562-7495/csi-mockplugin
Nov 13 05:23:50.889: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7562
Nov 13 05:23:50.893: INFO: deleting *v1.StatefulSet: 
csi-mock-volumes-7562-7495/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7562-7495 STEP: Waiting for namespaces [csi-mock-volumes-7562-7495] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:18.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:72.861 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":6,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:19.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 13 05:24:19.040: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 13 05:24:19.046: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:19.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-1766" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:12.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:24:12.367: INFO: The status of Pod test-hostpath-type-4j9bp is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:14.370: INFO: The status of Pod test-hostpath-type-4j9bp is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:16.372: INFO: The status of Pod 
test-hostpath-type-4j9bp is Running (Ready = true) STEP: running on node node1 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:20.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-7994" for this suite. • [SLOW TEST:8.075 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:10.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:24:10.948: INFO: The status of Pod test-hostpath-type-2cz2g is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:12.952: INFO: The status of Pod test-hostpath-type-2cz2g is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:24:14.954: INFO: The status of Pod test-hostpath-type-2cz2g is Running 
(Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:21.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-8188" for this suite. • [SLOW TEST:10.100 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket","total":-1,"completed":7,"skipped":408,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:21.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is 
specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Nov 13 05:24:21.131: INFO: Waiting up to 5m0s for pod "pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509" in namespace "emptydir-7206" to be "Succeeded or Failed" Nov 13 05:24:21.136: INFO: Pod "pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509": Phase="Pending", Reason="", readiness=false. Elapsed: 4.818253ms Nov 13 05:24:23.140: INFO: Pod "pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00889512s Nov 13 05:24:25.145: INFO: Pod "pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013975147s Nov 13 05:24:27.154: INFO: Pod "pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022624178s STEP: Saw pod success Nov 13 05:24:27.154: INFO: Pod "pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509" satisfied condition "Succeeded or Failed" Nov 13 05:24:27.157: INFO: Trying to get logs from node node2 pod pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509 container test-container: STEP: delete the pod Nov 13 05:24:27.948: INFO: Waiting for pod pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509 to disappear Nov 13 05:24:27.950: INFO: Pod pod-e10ea8ba-5787-4d28-b209-21f4fc9f6509 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:27.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7206" for this suite. 
• [SLOW TEST:6.858 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":8,"skipped":450,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:27.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 13 05:24:27.998: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 13 05:24:28.002: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:28.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pvc-protection-7762" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket","total":-1,"completed":8,"skipped":318,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:20.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path 
"/tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30" Nov 13 05:24:24.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30" "/tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30"] Namespace:persistent-local-volumes-test-1719 PodName:hostexec-node2-wptdb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:24.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:24:24.545: INFO: Creating a PV followed by a PVC Nov 13 05:24:24.552: INFO: Waiting for PV local-pv4wz2n to bind to PVC pvc-v9v65 Nov 13 05:24:24.552: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-v9v65] to have phase Bound Nov 13 05:24:24.555: INFO: PersistentVolumeClaim pvc-v9v65 found but phase is Pending instead of Bound. 
Nov 13 05:24:26.559: INFO: PersistentVolumeClaim pvc-v9v65 found and phase=Bound (2.00647927s) Nov 13 05:24:26.559: INFO: Waiting up to 3m0s for PersistentVolume local-pv4wz2n to have phase Bound Nov 13 05:24:26.561: INFO: PersistentVolume local-pv4wz2n found and phase=Bound (2.327516ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:24:30.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1719 exec pod-d035ee1d-12db-4fba-a688-1d1422909d82 --namespace=persistent-local-volumes-test-1719 -- stat -c %g /mnt/volume1' Nov 13 05:24:30.840: INFO: stderr: "" Nov 13 05:24:30.840: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:24:36.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1719 exec pod-959a31a9-ee43-4ea7-bd92-324633afd79f --namespace=persistent-local-volumes-test-1719 -- stat -c %g /mnt/volume1' Nov 13 05:24:37.101: INFO: stderr: "" Nov 13 05:24:37.101: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-d035ee1d-12db-4fba-a688-1d1422909d82 in namespace persistent-local-volumes-test-1719 STEP: Deleting second pod STEP: Deleting pod pod-959a31a9-ee43-4ea7-bd92-324633afd79f in namespace persistent-local-volumes-test-1719 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:24:37.111: INFO: Deleting 
PersistentVolumeClaim "pvc-v9v65" Nov 13 05:24:37.115: INFO: Deleting PersistentVolume "local-pv4wz2n" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30" Nov 13 05:24:37.119: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30"] Namespace:persistent-local-volumes-test-1719 PodName:hostexec-node2-wptdb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:37.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:24:37.206: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6bb6668d-cced-40e8-9dbd-6f1793b10d30] Namespace:persistent-local-volumes-test-1719 PodName:hostexec-node2-wptdb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:37.206: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:37.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1719" for this suite. 
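The fsGroup assertion in this test is the `stat -c %g` probe visible in the kubectl exec lines above: both pods must report the same group ID (1234 in this run) on the mount point. A hedged local sketch of that probe, assuming GNU coreutils `stat` and substituting the caller's own primary group for the test's fsGroup of 1234 (kubelet applies fsGroup by changing the volume's group ownership, emulated here with `chgrp`):

```shell
#!/bin/sh
set -eu

# Stand-in for the tmpfs mount point the two pods share.
d="$(mktemp -d)"

# Emulate kubelet's fsGroup ownership change with the caller's
# primary group (the real test uses fsGroup 1234).
g="$(id -g)"
chgrp "$g" "$d"

# The exact probe the test runs via kubectl exec in each pod:
stat -c %g "$d"

rm -rf "$d"
```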
• [SLOW TEST:16.916 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set same fsGroup for two pods simultaneously [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":9,"skipped":318,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:28.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:24:36.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c129e3ad-18c3-4263-bf69-1aa984266d74] Namespace:persistent-local-volumes-test-3577 PodName:hostexec-node1-bf8dl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:36.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:24:36.295: INFO: Creating a PV followed by a PVC
Nov 13 05:24:36.304: INFO: Waiting for PV local-pvsghsv to bind to PVC pvc-jkp28
Nov 13 05:24:36.304: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jkp28] to have phase Bound
Nov 13 05:24:36.307: INFO: PersistentVolumeClaim pvc-jkp28 found but phase is Pending instead of Bound.
Nov 13 05:24:38.311: INFO: PersistentVolumeClaim pvc-jkp28 found and phase=Bound (2.006498733s)
Nov 13 05:24:38.311: INFO: Waiting up to 3m0s for PersistentVolume local-pvsghsv to have phase Bound
Nov 13 05:24:38.314: INFO: PersistentVolume local-pvsghsv found and phase=Bound (2.833619ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Nov 13 05:24:38.319: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:24:38.321: INFO: Deleting PersistentVolumeClaim "pvc-jkp28"
Nov 13 05:24:38.324: INFO: Deleting PersistentVolume "local-pvsghsv"
STEP: Removing the test directory
Nov 13 05:24:38.328: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c129e3ad-18c3-4263-bf69-1aa984266d74] Namespace:persistent-local-volumes-test-3577 PodName:hostexec-node1-bf8dl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:38.328: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:24:38.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3577" for this suite.

S [SKIPPING] [10.509 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:06.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity used, no capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
STEP: Building a driver namespace object, basename csi-mock-volumes-6116
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:24:06.807: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-attacher
Nov 13 05:24:06.810: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6116
Nov 13 05:24:06.810: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6116
Nov 13 05:24:06.812: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6116
Nov 13 05:24:06.815: INFO: creating *v1.Role: csi-mock-volumes-6116-8839/external-attacher-cfg-csi-mock-volumes-6116
Nov 13 05:24:06.818: INFO: creating *v1.RoleBinding: csi-mock-volumes-6116-8839/csi-attacher-role-cfg
Nov 13 05:24:06.820: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-provisioner
Nov 13 05:24:06.823: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6116
Nov 13 05:24:06.824: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6116
Nov 13 05:24:06.826: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6116
Nov 13 05:24:06.829: INFO: creating *v1.Role: csi-mock-volumes-6116-8839/external-provisioner-cfg-csi-mock-volumes-6116
Nov 13 05:24:06.831: INFO: creating *v1.RoleBinding: csi-mock-volumes-6116-8839/csi-provisioner-role-cfg
Nov 13 05:24:06.834: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-resizer
Nov 13 05:24:06.836: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6116
Nov 13 05:24:06.836: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6116
Nov 13 05:24:06.839: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6116
Nov 13 05:24:06.841: INFO: creating *v1.Role: csi-mock-volumes-6116-8839/external-resizer-cfg-csi-mock-volumes-6116
Nov 13 05:24:06.844: INFO: creating *v1.RoleBinding: csi-mock-volumes-6116-8839/csi-resizer-role-cfg
Nov 13 05:24:06.846: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-snapshotter
Nov 13 05:24:06.849: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6116
Nov 13 05:24:06.849: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6116
Nov 13 05:24:06.851: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6116
Nov 13 05:24:06.854: INFO: creating *v1.Role: csi-mock-volumes-6116-8839/external-snapshotter-leaderelection-csi-mock-volumes-6116
Nov 13 05:24:06.857: INFO: creating *v1.RoleBinding: csi-mock-volumes-6116-8839/external-snapshotter-leaderelection
Nov 13 05:24:06.860: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-mock
Nov 13 05:24:06.863: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6116
Nov 13 05:24:06.865: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6116
Nov 13 05:24:06.868: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6116
Nov 13 05:24:06.871: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6116
Nov 13 05:24:06.873: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6116
Nov 13 05:24:06.876: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6116
Nov 13 05:24:06.878: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6116
Nov 13 05:24:06.882: INFO: creating *v1.StatefulSet: csi-mock-volumes-6116-8839/csi-mockplugin
Nov 13 05:24:06.886: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6116
Nov 13 05:24:06.888: INFO: creating *v1.StatefulSet: csi-mock-volumes-6116-8839/csi-mockplugin-attacher
Nov 13 05:24:06.892: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6116"
Nov 13 05:24:06.894: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6116 to register on node node1
STEP: Creating pod
Nov 13 05:24:21.412: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Deleting the previously created pod
Nov 13 05:24:21.431: INFO: Deleting pod "pvc-volume-tester-9xkbn" in namespace "csi-mock-volumes-6116"
Nov 13 05:24:21.436: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9xkbn" to be fully deleted
STEP: Deleting pod pvc-volume-tester-9xkbn
Nov 13 05:24:21.438: INFO: Deleting pod "pvc-volume-tester-9xkbn" in namespace "csi-mock-volumes-6116"
STEP: Deleting claim pvc-cskp6
STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-6116
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-6116
STEP: Waiting for namespaces [csi-mock-volumes-6116] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:24:27.460: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-attacher
Nov 13 05:24:27.464: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6116
Nov 13 05:24:27.467: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6116
Nov 13 05:24:27.471: INFO: deleting *v1.Role: csi-mock-volumes-6116-8839/external-attacher-cfg-csi-mock-volumes-6116
Nov 13 05:24:27.474: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6116-8839/csi-attacher-role-cfg
Nov 13 05:24:27.479: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-provisioner
Nov 13 05:24:27.483: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6116
Nov 13 05:24:27.486: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6116
Nov 13 05:24:27.490: INFO: deleting *v1.Role: csi-mock-volumes-6116-8839/external-provisioner-cfg-csi-mock-volumes-6116
Nov 13 05:24:27.493: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6116-8839/csi-provisioner-role-cfg
Nov 13 05:24:27.496: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-resizer
Nov 13 05:24:27.499: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6116
Nov 13 05:24:27.503: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6116
Nov 13 05:24:27.508: INFO: deleting *v1.Role: csi-mock-volumes-6116-8839/external-resizer-cfg-csi-mock-volumes-6116
Nov 13 05:24:27.511: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6116-8839/csi-resizer-role-cfg
Nov 13 05:24:27.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-snapshotter
Nov 13 05:24:27.518: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6116
Nov 13 05:24:27.525: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6116
Nov 13 05:24:27.532: INFO: deleting *v1.Role: csi-mock-volumes-6116-8839/external-snapshotter-leaderelection-csi-mock-volumes-6116
Nov 13 05:24:27.539: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6116-8839/external-snapshotter-leaderelection
Nov 13 05:24:27.546: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6116-8839/csi-mock
Nov 13 05:24:27.549: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6116
Nov 13 05:24:27.552: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6116
Nov 13 05:24:27.555: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6116
Nov 13 05:24:27.559: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6116
Nov 13 05:24:27.562: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6116
Nov 13 05:24:27.566: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6116
Nov 13 05:24:27.570: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6116
Nov 13 05:24:27.573: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6116-8839/csi-mockplugin
Nov 13 05:24:27.577: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6116
Nov 13 05:24:27.580: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6116-8839/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-6116-8839
STEP: Waiting for namespaces [csi-mock-volumes-6116-8839] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:24:39.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:32.850 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":5,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Regional PD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:39.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename regional-pd
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Regional PD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68
Nov 13 05:24:39.750: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-storage] Regional PD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:24:39.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "regional-pd-4599" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-storage] Regional PD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  RegionalPD [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76
    should provision storage [Slow]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:77

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:53.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not expand volume if resizingOnDriver=off, resizingOnSC=on
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-2661
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:21:53.094: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-attacher
Nov 13 05:21:53.096: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2661
Nov 13 05:21:53.096: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2661
Nov 13 05:21:53.098: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2661
Nov 13 05:21:53.101: INFO: creating *v1.Role: csi-mock-volumes-2661-9353/external-attacher-cfg-csi-mock-volumes-2661
Nov 13 05:21:53.103: INFO: creating *v1.RoleBinding: csi-mock-volumes-2661-9353/csi-attacher-role-cfg
Nov 13 05:21:53.106: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-provisioner
Nov 13 05:21:53.108: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2661
Nov 13 05:21:53.108: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2661
Nov 13 05:21:53.111: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2661
Nov 13 05:21:53.115: INFO: creating *v1.Role: csi-mock-volumes-2661-9353/external-provisioner-cfg-csi-mock-volumes-2661
Nov 13 05:21:53.117: INFO: creating *v1.RoleBinding: csi-mock-volumes-2661-9353/csi-provisioner-role-cfg
Nov 13 05:21:53.120: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-resizer
Nov 13 05:21:53.122: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2661
Nov 13 05:21:53.122: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2661
Nov 13 05:21:53.125: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2661
Nov 13 05:21:53.128: INFO: creating *v1.Role: csi-mock-volumes-2661-9353/external-resizer-cfg-csi-mock-volumes-2661
Nov 13 05:21:53.130: INFO: creating *v1.RoleBinding: csi-mock-volumes-2661-9353/csi-resizer-role-cfg
Nov 13 05:21:53.134: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-snapshotter
Nov 13 05:21:53.137: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2661
Nov 13 05:21:53.137: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2661
Nov 13 05:21:53.140: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2661
Nov 13 05:21:53.143: INFO: creating *v1.Role: csi-mock-volumes-2661-9353/external-snapshotter-leaderelection-csi-mock-volumes-2661
Nov 13 05:21:53.146: INFO: creating *v1.RoleBinding: csi-mock-volumes-2661-9353/external-snapshotter-leaderelection
Nov 13 05:21:53.149: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-mock
Nov 13 05:21:53.151: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2661
Nov 13 05:21:53.153: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2661
Nov 13 05:21:53.156: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2661
Nov 13 05:21:53.158: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2661
Nov 13 05:21:53.161: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2661
Nov 13 05:21:53.163: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2661
Nov 13 05:21:53.165: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2661
Nov 13 05:21:53.168: INFO: creating *v1.StatefulSet: csi-mock-volumes-2661-9353/csi-mockplugin
Nov 13 05:21:53.172: INFO: creating *v1.StatefulSet: csi-mock-volumes-2661-9353/csi-mockplugin-attacher
Nov 13 05:21:53.176: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2661 to register on node node2
STEP: Creating pod
Nov 13 05:22:02.694: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:22:02.699: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-s25tn] to have phase Bound
Nov 13 05:22:02.702: INFO: PersistentVolumeClaim pvc-s25tn found but phase is Pending instead of Bound.
Nov 13 05:22:04.705: INFO: PersistentVolumeClaim pvc-s25tn found and phase=Bound (2.005899024s)
STEP: Expanding current pvc
STEP: Deleting pod pvc-volume-tester-4mzt5
Nov 13 05:24:18.744: INFO: Deleting pod "pvc-volume-tester-4mzt5" in namespace "csi-mock-volumes-2661"
Nov 13 05:24:18.749: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4mzt5" to be fully deleted
STEP: Deleting claim pvc-s25tn
Nov 13 05:24:22.761: INFO: Waiting up to 2m0s for PersistentVolume pvc-72a29822-d085-4c1a-b8e7-3bc1a87591ce to get deleted
Nov 13 05:24:22.764: INFO: PersistentVolume pvc-72a29822-d085-4c1a-b8e7-3bc1a87591ce found and phase=Bound (2.123559ms)
Nov 13 05:24:24.767: INFO: PersistentVolume pvc-72a29822-d085-4c1a-b8e7-3bc1a87591ce found and phase=Released (2.005178973s)
Nov 13 05:24:26.773: INFO: PersistentVolume pvc-72a29822-d085-4c1a-b8e7-3bc1a87591ce was removed
STEP: Deleting storageclass csi-mock-volumes-2661-scvv64d
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-2661
STEP: Waiting for namespaces [csi-mock-volumes-2661] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:24:32.787: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-attacher
Nov 13 05:24:32.793: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2661
Nov 13 05:24:32.797: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2661
Nov 13 05:24:32.800: INFO: deleting *v1.Role: csi-mock-volumes-2661-9353/external-attacher-cfg-csi-mock-volumes-2661
Nov 13 05:24:32.804: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2661-9353/csi-attacher-role-cfg
Nov 13 05:24:32.807: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-provisioner
Nov 13 05:24:32.810: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2661
Nov 13 05:24:32.814: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2661
Nov 13 05:24:32.817: INFO: deleting *v1.Role: csi-mock-volumes-2661-9353/external-provisioner-cfg-csi-mock-volumes-2661
Nov 13 05:24:32.820: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2661-9353/csi-provisioner-role-cfg
Nov 13 05:24:32.823: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-resizer
Nov 13 05:24:32.828: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2661
Nov 13 05:24:32.831: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2661
Nov 13 05:24:32.835: INFO: deleting *v1.Role: csi-mock-volumes-2661-9353/external-resizer-cfg-csi-mock-volumes-2661
Nov 13 05:24:32.839: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2661-9353/csi-resizer-role-cfg
Nov 13 05:24:32.842: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-snapshotter
Nov 13 05:24:32.847: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2661
Nov 13 05:24:32.851: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2661
Nov 13 05:24:32.854: INFO: deleting *v1.Role: csi-mock-volumes-2661-9353/external-snapshotter-leaderelection-csi-mock-volumes-2661
Nov 13 05:24:32.857: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2661-9353/external-snapshotter-leaderelection
Nov 13 05:24:32.860: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2661-9353/csi-mock
Nov 13 05:24:32.864: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2661
Nov 13 05:24:32.867: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2661
Nov 13 05:24:32.870: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2661
Nov 13 05:24:32.873: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2661
Nov 13 05:24:32.877: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2661
Nov 13 05:24:32.881: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2661
Nov 13 05:24:32.885: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2661
Nov 13 05:24:32.889: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2661-9353/csi-mockplugin
Nov 13 05:24:32.892: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2661-9353/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-2661-9353
STEP: Waiting for namespaces [csi-mock-volumes-2661-9353] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:24:44.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:171.878 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":9,"skipped":261,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:21:00.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that container can restart successfully after configmaps modified
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-fkjr STEP: Failing liveness probe Nov 13 05:21:16.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=subpath-2761 exec pod-subpath-test-configmap-fkjr --container test-container-volume-configmap-fkjr -- /bin/sh -c rm /probe-volume/probe-file' Nov 13 05:21:17.302: INFO: stderr: "" Nov 13 05:21:17.302: INFO: stdout: "" Nov 13 05:21:17.302: INFO: Pod exec output: STEP: Waiting for container to restart Nov 13 05:21:17.305: INFO: Container test-container-subpath-configmap-fkjr, restarts: 0 Nov 13 05:21:27.309: INFO: Container test-container-subpath-configmap-fkjr, restarts: 0 Nov 13 05:21:37.310: INFO: Container test-container-subpath-configmap-fkjr, restarts: 0 Nov 13 05:21:47.309: INFO: Container test-container-subpath-configmap-fkjr, restarts: 0 Nov 13 05:21:57.309: INFO: Container test-container-subpath-configmap-fkjr, restarts: 0 Nov 13 05:22:07.314: INFO: Container test-container-subpath-configmap-fkjr, restarts: 0 Nov 13 05:22:17.309: INFO: Container test-container-subpath-configmap-fkjr, restarts: 1 Nov 13 05:22:17.309: INFO: Container has restart count: 1 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Nov 13 05:22:27.319: INFO: Container has restart count: 3 Nov 13 05:23:07.324: INFO: Container has restart count: 4 Nov 13 05:23:35.320: INFO: Container has restart count: 5 Nov 13 05:24:37.319: INFO: Container restart has stabilized Nov 13 05:24:37.319: INFO: Deleting pod "pod-subpath-test-configmap-fkjr" in namespace "subpath-2761" Nov 13 05:24:37.323: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-fkjr" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:55.333: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2761" for this suite. • [SLOW TEST:234.561 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":3,"skipped":63,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:55.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should not provision a volume in an unmanaged GCE zone. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Nov 13 05:24:55.417: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:55.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-5363" for this suite. 
S [SKIPPING] [0.035 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should not provision a volume in an unmanaged GCE zone. [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:452 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:38.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:24:58.713: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7325 PodName:hostexec-node1-lmjbt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:24:58.713: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:24:58.807: INFO: exec 
node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:24:58.807: INFO: exec node1: stdout: "0\n" Nov 13 05:24:58.807: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:24:58.807: INFO: exec node1: exit code: 0 Nov 13 05:24:58.807: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:24:58.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7325" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [20.156 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:14.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:24:16.559: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bd44a943-6a95-4517-b6ea-8ef2894c6e4c-backend && ln -s /tmp/local-volume-test-bd44a943-6a95-4517-b6ea-8ef2894c6e4c-backend /tmp/local-volume-test-bd44a943-6a95-4517-b6ea-8ef2894c6e4c] Namespace:persistent-local-volumes-test-1572 PodName:hostexec-node1-x9ftr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:16.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:24:16.653: INFO: Creating a PV followed by a PVC
Nov 13 05:24:16.660: INFO: Waiting for PV local-pvd8hv4 to bind to PVC pvc-5nrf5
Nov 13 05:24:16.660: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5nrf5] to have phase Bound
Nov 13 05:24:16.662: INFO: PersistentVolumeClaim pvc-5nrf5 found but phase is Pending instead of Bound.
Nov 13 05:24:18.669: INFO: PersistentVolumeClaim pvc-5nrf5 found but phase is Pending instead of Bound.
Nov 13 05:24:20.671: INFO: PersistentVolumeClaim pvc-5nrf5 found but phase is Pending instead of Bound.
Nov 13 05:24:22.676: INFO: PersistentVolumeClaim pvc-5nrf5 found but phase is Pending instead of Bound.
Nov 13 05:24:24.682: INFO: PersistentVolumeClaim pvc-5nrf5 found but phase is Pending instead of Bound.
Nov 13 05:24:26.687: INFO: PersistentVolumeClaim pvc-5nrf5 found and phase=Bound (10.0268747s)
Nov 13 05:24:26.687: INFO: Waiting up to 3m0s for PersistentVolume local-pvd8hv4 to have phase Bound
Nov 13 05:24:26.689: INFO: PersistentVolume local-pvd8hv4 found and phase=Bound (1.98479ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Nov 13 05:24:36.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1572 exec pod-36279c5e-bb53-4eff-a8e1-153d764fe486 --namespace=persistent-local-volumes-test-1572 -- stat -c %g /mnt/volume1'
Nov 13 05:24:36.955: INFO: stderr: ""
Nov 13 05:24:36.955: INFO: stdout: "1234\n"
STEP: Create second pod with same fsGroup and check fsGroup is correct
STEP: Creating a pod
Nov 13 05:24:58.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1572 exec pod-fdfd982d-eeed-42a5-85cb-c97a8098ef81 --namespace=persistent-local-volumes-test-1572 -- stat -c %g /mnt/volume1'
Nov 13 05:24:59.236: INFO: stderr: ""
Nov 13 05:24:59.236: INFO: stdout: "1234\n"
STEP: Deleting first pod
STEP: Deleting pod pod-36279c5e-bb53-4eff-a8e1-153d764fe486 in namespace persistent-local-volumes-test-1572
STEP: Deleting second pod
STEP: Deleting pod pod-fdfd982d-eeed-42a5-85cb-c97a8098ef81 in namespace persistent-local-volumes-test-1572
[AfterEach] [Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:24:59.246: INFO: Deleting PersistentVolumeClaim "pvc-5nrf5"
Nov 13 05:24:59.250: INFO: Deleting PersistentVolume "local-pvd8hv4"
STEP: Removing the test directory
Nov 13 05:24:59.254: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bd44a943-6a95-4517-b6ea-8ef2894c6e4c && rm -r /tmp/local-volume-test-bd44a943-6a95-4517-b6ea-8ef2894c6e4c-backend] Namespace:persistent-local-volumes-test-1572 PodName:hostexec-node1-x9ftr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:59.254: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:24:59.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1572" for this suite.
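The [Volume type: dir-link] fixture above reduces to two shell commands run on the node through nsenter: create a backing directory and symlink the volume path to it, then remove both on teardown. A standalone sketch (the path below is an illustrative placeholder, not the test's generated UUID name):

```shell
# Sketch of the dir-link volume setup/teardown from the log above.
# The PV path is a symlink to a separate backing directory.
base=$(mktemp -d)/local-volume-test-example   # placeholder for /tmp/local-volume-test-<uuid>

# Setup: create the backing directory, then link the volume path to it
mkdir "${base}-backend" && ln -s "${base}-backend" "${base}"

# (PV/PVC creation and the pod runs happen between setup and teardown)

# Teardown: remove the symlink, then the backing directory
rm -r "${base}" && rm -r "${base}-backend"
```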
• [SLOW TEST:44.874 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":14,"skipped":372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:58.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Nov 13 05:24:58.877: INFO: Waiting up to 5m0s for pod "pod-dedfea29-fe13-4caa-8990-be4935c5050a" in namespace "emptydir-7717" to be "Succeeded or Failed"
Nov 13 05:24:58.880: INFO: Pod "pod-dedfea29-fe13-4caa-8990-be4935c5050a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.574975ms
Nov 13 05:25:00.884: INFO: Pod "pod-dedfea29-fe13-4caa-8990-be4935c5050a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006212565s
Nov 13 05:25:02.889: INFO: Pod "pod-dedfea29-fe13-4caa-8990-be4935c5050a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011718835s
STEP: Saw pod success
Nov 13 05:25:02.889: INFO: Pod "pod-dedfea29-fe13-4caa-8990-be4935c5050a" satisfied condition "Succeeded or Failed"
Nov 13 05:25:02.893: INFO: Trying to get logs from node node2 pod pod-dedfea29-fe13-4caa-8990-be4935c5050a container test-container:
STEP: delete the pod
Nov 13 05:25:02.906: INFO: Waiting for pod pod-dedfea29-fe13-4caa-8990-be4935c5050a to disappear
Nov 13 05:25:02.908: INFO: Pod pod-dedfea29-fe13-4caa-8990-be4935c5050a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:02.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7717" for this suite.
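The repeated "found but phase is Pending instead of Bound" lines in the PersistentVolumes-local tests above are the framework polling the claim's phase until it reports Bound. The same wait can be sketched with plain kubectl (the function name and namespace/claim arguments are illustrative, not from the test):

```shell
# Sketch: poll a PVC until its phase is Bound, mirroring the framework's
# "Waiting up to timeout=3m0s for PersistentVolumeClaims ... to have phase Bound".
wait_for_pvc_bound() {
  local ns=$1 pvc=$2 timeout=${3:-180} elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    phase=$(kubectl -n "$ns" get pvc "$pvc" -o jsonpath='{.status.phase}')
    if [ "$phase" = "Bound" ]; then
      echo "PVC $pvc bound after ${elapsed}s"
      return 0
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "PVC $pvc not Bound within ${timeout}s" >&2
  return 1
}
```

For a pre-bound local PV this typically converges within a few poll intervals, as the (10.0268747s) and (2.005726197s) timings in the log show.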
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":537,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:55.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6"
Nov 13 05:25:03.556: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6 && dd if=/dev/zero of=/tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6/file] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:03.557: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:03.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:03.694: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:03.800: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6 && chmod o+rwx /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:03.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:25:04.057: INFO: Creating a PV followed by a PVC
Nov 13 05:25:04.064: INFO: Waiting for PV local-pvt8j8h to bind to PVC pvc-th29x
Nov 13 05:25:04.064: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-th29x] to have phase Bound
Nov 13 05:25:04.067: INFO: PersistentVolumeClaim pvc-th29x found but phase is Pending instead of Bound.
Nov 13 05:25:06.069: INFO: PersistentVolumeClaim pvc-th29x found and phase=Bound (2.005726197s)
Nov 13 05:25:06.070: INFO: Waiting up to 3m0s for PersistentVolume local-pvt8j8h to have phase Bound
Nov 13 05:25:06.072: INFO: PersistentVolume local-pvt8j8h found and phase=Bound (2.717928ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:25:10.103: INFO: pod "pod-331d9e95-a27f-4cc5-970a-dee0a1f31e3d" created on Node "node1"
STEP: Writing in pod1
Nov 13 05:25:10.103: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2909 PodName:pod-331d9e95-a27f-4cc5-970a-dee0a1f31e3d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:10.104: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:10.188: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Nov 13 05:25:10.188: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2909 PodName:pod-331d9e95-a27f-4cc5-970a-dee0a1f31e3d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:10.188: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:10.267: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-331d9e95-a27f-4cc5-970a-dee0a1f31e3d in namespace persistent-local-volumes-test-2909
[AfterEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:25:10.272: INFO: Deleting PersistentVolumeClaim "pvc-th29x"
Nov 13 05:25:10.276: INFO: Deleting PersistentVolume "local-pvt8j8h"
Nov 13 05:25:10.280: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:10.280: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:10.375: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:10.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6/file
Nov 13 05:25:10.467: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:10.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6
Nov 13 05:25:10.552: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cecdd47d-fe9b-48c4-9e7c-9de28de553d6] Namespace:persistent-local-volumes-test-2909 PodName:hostexec-node1-9j774 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:10.552: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:10.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2909" for this suite.
• [SLOW TEST:15.152 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:02.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-file
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124
STEP: Create a pod for further testing
Nov 13 05:25:02.977: INFO: The status of Pod test-hostpath-type-ckj5v is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:25:04.983: INFO: The status of Pod test-hostpath-type-ckj5v is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:25:06.981: INFO: The status of Pod test-hostpath-type-ckj5v is Running (Ready = true)
STEP: running on node node2
STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate
[It] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:13.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-file-3772" for this suite.
• [SLOW TEST:10.100 seconds]
[sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166
------------------------------
{"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev","total":-1,"completed":10,"skipped":548,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:10.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Nov 13 05:25:14.756: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2300 PodName:hostexec-node1-t5n5h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.756: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:14.857: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Nov 13 05:25:14.857: INFO: exec node1: stdout: "0\n"
Nov 13 05:25:14.857: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Nov 13 05:25:14.857: INFO: exec node1: exit code: 0
Nov 13 05:25:14.857: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:14.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2300" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [4.163 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
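The [Volume type: blockfswithformat] fixture earlier creates a 20 MiB file, attaches it to a loop device, formats it as ext4 and mounts it; teardown reverses each step. A sketch of that sequence (paths are placeholders; the loop/mkfs/mount steps require root, so they are guarded and the script is harmless without privileges):

```shell
# Sketch of the blockfswithformat volume lifecycle from the log above.
dir=$(mktemp -d)   # placeholder for /tmp/local-volume-test-<uuid>

# 4096-byte blocks x 5120 = 20 MiB backing file, as in the test's dd command
dd if=/dev/zero of="$dir/file" bs=4096 count=5120 2>/dev/null

if [ "$(id -u)" -eq 0 ]; then
  losetup -f "$dir/file"                                  # attach first free loop device
  dev=$(losetup | grep "$dir/file" | awk '{ print $1 }')  # discover which device it got
  mkfs -t ext4 "$dev" && mount -t ext4 "$dev" "$dir" && chmod o+rwx "$dir"

  # (local PV/PVC creation and pod usage would happen here)

  # Teardown mirrors the test: unmount, then detach the loop device
  umount "$dir" && losetup -d "$dev"
fi
rm -r "$dir"
```

Note the test rediscovers the device with `losetup | grep ... | awk` rather than trusting `/dev/loop0`, since another process may hold the first free loop device.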
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:45.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49
[It] should allow deletion of pod with invalid volume : projected
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
Nov 13 05:25:15.040: INFO: Deleting pod "pv-5074"/"pod-ephm-test-projected-qqzx"
Nov 13 05:25:15.040: INFO: Deleting pod "pod-ephm-test-projected-qqzx" in namespace "pv-5074"
Nov 13 05:25:15.046: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-qqzx" to be fully deleted
[AfterEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:23.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-5074" for this suite.
• [SLOW TEST:38.053 seconds]
[sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
should allow deletion of pod with invalid volume : projected
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":10,"skipped":308,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:37.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] StatefulSet with pod affinity [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391
STEP: Setting up local volumes on node "node1"
STEP: Initializing test volumes
Nov 13 05:24:59.393: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-be0d2061-77a6-4905-85d9-e6327b106b37] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:59.393: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:24:59.498: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b57c325b-8072-4da2-8c3e-6eb5d8527356] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:59.498: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:24:59.657: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ef1faca1-89cf-4e02-9d32-a511e3818c09] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:59.657: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:24:59.853: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ad4e1f4d-e47e-431d-be89-8d04bd5c4266] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:59.853: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:24:59.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7d69da1d-03bd-487f-a679-7fd7220df679] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:24:59.949: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:00.129: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3bf51239-0c3c-4daa-9a5d-1a3fcf02d28f] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:00.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:25:00.253: INFO: Creating a PV followed by a PVC
Nov 13 05:25:00.259: INFO: Creating a PV followed by a PVC
Nov 13 05:25:00.265: INFO: Creating a PV followed by a PVC
Nov 13 05:25:00.272: INFO: Creating a PV followed by a PVC
Nov 13 05:25:00.279: INFO: Creating a PV followed by a PVC
Nov 13 05:25:00.284: INFO: Creating a PV followed by a PVC
Nov 13 05:25:10.328: INFO: PVCs were not bound within 10s (that's good)
STEP: Setting up local volumes on node "node2"
STEP: Initializing test volumes
Nov 13 05:25:14.350: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-441c3207-5e63-4686-9671-b8b311579c53] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.350: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:14.481: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ac6d6878-e61e-4ff6-acb7-80d27ad801af] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.481: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:14.583: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-37f002da-3889-4607-a63f-e88bf88e7b45] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.583: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:14.690: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-668e4f59-a33f-4e49-87fc-cdcc4f86b2ea] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.690: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:14.781: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ed19f042-c4cb-47ce-8e9e-e9ef61401eaa] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.781: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:14.861: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5bc4d3a8-e474-4069-8308-d70bc065607b] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:14.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:25:14.943: INFO: Creating a PV followed by a PVC
Nov 13 05:25:14.949: INFO: Creating a PV followed by a PVC
Nov 13 05:25:14.954: INFO: Creating a PV followed by a PVC
Nov 13 05:25:14.961: INFO: Creating a PV followed by a PVC
Nov 13 05:25:14.967: INFO: Creating a PV followed by a PVC
Nov 13 05:25:14.972: INFO: Creating a PV followed by a PVC
Nov 13 05:25:25.013: INFO: PVCs were not bound within 10s (that's good)
[It] should use volumes spread across nodes when pod has anti-affinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410
Nov 13 05:25:25.013: INFO: Runs only when number of nodes >= 3
[AfterEach] StatefulSet with pod affinity [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.014: INFO: Deleting PersistentVolumeClaim "pvc-g8624"
Nov 13 05:25:25.017: INFO: Deleting PersistentVolume "local-pvkvsrh"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.020: INFO: Deleting PersistentVolumeClaim "pvc-v246d"
Nov 13 05:25:25.024: INFO: Deleting PersistentVolume "local-pv2mlsd"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.028: INFO: Deleting PersistentVolumeClaim "pvc-4s7pl"
Nov 13 05:25:25.031: INFO: Deleting PersistentVolume "local-pvggfct"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.035: INFO: Deleting PersistentVolumeClaim "pvc-dldm9"
Nov 13 05:25:25.039: INFO: Deleting PersistentVolume "local-pvfhf2c"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.043: INFO: Deleting PersistentVolumeClaim "pvc-5ts4n"
Nov 13 05:25:25.047: INFO: Deleting PersistentVolume "local-pvd5xm7"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.051: INFO: Deleting PersistentVolumeClaim "pvc-phn72"
Nov 13 05:25:25.054: INFO: Deleting PersistentVolume "local-pv2grjw"
STEP: Removing the test directory
Nov 13 05:25:25.057: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-441c3207-5e63-4686-9671-b8b311579c53] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.157: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ac6d6878-e61e-4ff6-acb7-80d27ad801af] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.276: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-37f002da-3889-4607-a63f-e88bf88e7b45] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-668e4f59-a33f-4e49-87fc-cdcc4f86b2ea] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.468: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ed19f042-c4cb-47ce-8e9e-e9ef61401eaa] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.549: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5bc4d3a8-e474-4069-8308-d70bc065607b] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node2-77rcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.631: INFO: Deleting PersistentVolumeClaim "pvc-6djkj"
Nov 13 05:25:25.635: INFO: Deleting PersistentVolume "local-pvjcxs5"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.639: INFO: Deleting PersistentVolumeClaim "pvc-sj5zj"
Nov 13 05:25:25.644: INFO: Deleting PersistentVolume "local-pvbc62d"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.648: INFO: Deleting PersistentVolumeClaim "pvc-mvr69"
Nov 13 05:25:25.652: INFO: Deleting PersistentVolume "local-pvb2gjz"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.655: INFO: Deleting PersistentVolumeClaim "pvc-sv2z4"
Nov 13 05:25:25.659: INFO: Deleting PersistentVolume "local-pvt47gs"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.662: INFO: Deleting PersistentVolumeClaim "pvc-d7zk7"
Nov 13 05:25:25.666: INFO: Deleting PersistentVolume "local-pvc8skd"
STEP: Cleaning up PVC and PV
Nov 13 05:25:25.669: INFO: Deleting PersistentVolumeClaim "pvc-zqj2n"
Nov 13 05:25:25.674: INFO: Deleting PersistentVolume "local-pvzcfsx"
STEP: Removing the test directory
Nov 13 05:25:25.679: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-be0d2061-77a6-4905-85d9-e6327b106b37] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b57c325b-8072-4da2-8c3e-6eb5d8527356] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:25.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:25:25.921: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ef1faca1-89cf-4e02-9d32-a511e3818c09]
Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:25.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:25:26.005: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad4e1f4d-e47e-431d-be89-8d04bd5c4266] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:26.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:25:26.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7d69da1d-03bd-487f-a679-7fd7220df679] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:26.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:25:26.193: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3bf51239-0c3c-4daa-9a5d-1a3fcf02d28f] Namespace:persistent-local-volumes-test-6128 PodName:hostexec-node1-dk8rt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:26.194: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:25:26.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6128" for this suite. 
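Editor's note: the repeated `ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ...]}` entries above are the e2e framework running setup/teardown commands directly on the node through a privileged "hostexec" pod, re-entering the host's mount namespace via the pod's `/rootfs` bind-mount of the node filesystem. A minimal sketch of the equivalent manual invocation (pod and namespace names are taken from the log above; the target path and the `kubectl exec` form are illustrative assumptions, not the framework's exact call):

```shell
# Build the node-exec command that one ExecWithOptions entry corresponds to.
# Names below come from the log; the path is an illustrative placeholder.
ns=persistent-local-volumes-test-6128
pod=hostexec-node2-77rcv
dir=/tmp/local-volume-test-example   # illustrative, not a real UUID from the log
cmd="kubectl exec -n $ns $pod -c agnhost-container -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'rm -r $dir'"
# Printed here rather than executed, since it requires a live cluster:
echo "$cmd"
```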
S [SKIPPING] [48.947 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:412 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:25:13.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03" Nov 13 05:25:17.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03" "/tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03"] Namespace:persistent-local-volumes-test-5127 PodName:hostexec-node1-8fg6t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:17.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:25:17.247: INFO: Creating a PV followed by a PVC Nov 13 05:25:17.254: INFO: Waiting for PV local-pv2q8w5 to bind to PVC pvc-gw72b Nov 13 05:25:17.254: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gw72b] to have phase Bound Nov 13 05:25:17.256: INFO: PersistentVolumeClaim pvc-gw72b found but phase is Pending instead of Bound. Nov 13 05:25:19.258: INFO: PersistentVolumeClaim pvc-gw72b found but phase is Pending instead of Bound. Nov 13 05:25:21.261: INFO: PersistentVolumeClaim pvc-gw72b found but phase is Pending instead of Bound. Nov 13 05:25:23.265: INFO: PersistentVolumeClaim pvc-gw72b found but phase is Pending instead of Bound. Nov 13 05:25:25.269: INFO: PersistentVolumeClaim pvc-gw72b found but phase is Pending instead of Bound. 
Nov 13 05:25:27.273: INFO: PersistentVolumeClaim pvc-gw72b found and phase=Bound (10.019339164s) Nov 13 05:25:27.273: INFO: Waiting up to 3m0s for PersistentVolume local-pv2q8w5 to have phase Bound Nov 13 05:25:27.275: INFO: PersistentVolume local-pv2q8w5 found and phase=Bound (1.773518ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:25:27.278: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:25:27.280: INFO: Deleting PersistentVolumeClaim "pvc-gw72b" Nov 13 05:25:27.283: INFO: Deleting PersistentVolume "local-pv2q8w5" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03" Nov 13 05:25:27.286: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03"] Namespace:persistent-local-volumes-test-5127 PodName:hostexec-node1-8fg6t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:27.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:25:27.753: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-31e4bb7c-b7a3-4ea4-b4d8-56e5be93bc03] Namespace:persistent-local-volumes-test-5127 PodName:hostexec-node1-8fg6t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 
05:25:27.753: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:25:28.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5127" for this suite. S [SKIPPING] [15.362 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:25:23.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0" Nov 13 05:25:25.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0 && dd if=/dev/zero of=/tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0/file] Namespace:persistent-local-volumes-test-6658 PodName:hostexec-node2-txgg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:25.115: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:25:25.227: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6658 PodName:hostexec-node2-txgg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:25.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:25:25.325: INFO: Creating a PV followed by a PVC Nov 13 05:25:25.332: INFO: Waiting for PV local-pvdsvzj to bind to PVC pvc-zw5vg Nov 13 05:25:25.332: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zw5vg] to have phase Bound Nov 13 05:25:25.335: INFO: PersistentVolumeClaim pvc-zw5vg found but phase is Pending instead of Bound. 
Nov 13 05:25:27.338: INFO: PersistentVolumeClaim pvc-zw5vg found and phase=Bound (2.005933086s) Nov 13 05:25:27.338: INFO: Waiting up to 3m0s for PersistentVolume local-pvdsvzj to have phase Bound Nov 13 05:25:27.341: INFO: PersistentVolume local-pvdsvzj found and phase=Bound (2.280849ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 13 05:25:27.345: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:25:27.346: INFO: Deleting PersistentVolumeClaim "pvc-zw5vg" Nov 13 05:25:27.350: INFO: Deleting PersistentVolume "local-pvdsvzj" Nov 13 05:25:27.355: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6658 PodName:hostexec-node2-txgg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:27.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0/file Nov 13 05:25:28.310: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6658 PodName:hostexec-node2-txgg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:28.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0 Nov 13 05:25:28.416: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-667220fe-e4f2-43dc-8dbe-e92ae5db74a0] Namespace:persistent-local-volumes-test-6658 PodName:hostexec-node2-txgg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:25:28.416: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:25:28.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6658" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [5.458 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:19.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-394 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:24:19.135: INFO: creating *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-attacher Nov 13 05:24:19.138: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-394 Nov 13 05:24:19.138: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-394 Nov 13 05:24:19.141: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-394 Nov 13 05:24:19.144: INFO: creating *v1.Role: csi-mock-volumes-394-3149/external-attacher-cfg-csi-mock-volumes-394 Nov 13 05:24:19.146: INFO: creating *v1.RoleBinding: csi-mock-volumes-394-3149/csi-attacher-role-cfg Nov 13 05:24:19.149: INFO: creating *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-provisioner Nov 13 05:24:19.151: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-394 Nov 13 05:24:19.151: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-394 Nov 13 05:24:19.154: INFO: creating *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-394 Nov 13 05:24:19.157: INFO: creating *v1.Role: csi-mock-volumes-394-3149/external-provisioner-cfg-csi-mock-volumes-394 Nov 13 05:24:19.160: INFO: creating *v1.RoleBinding: csi-mock-volumes-394-3149/csi-provisioner-role-cfg Nov 13 05:24:19.163: INFO: creating *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-resizer Nov 13 05:24:19.166: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-394 Nov 13 05:24:19.166: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-394 Nov 13 05:24:19.170: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-394 Nov 13 05:24:19.173: INFO: creating *v1.Role: csi-mock-volumes-394-3149/external-resizer-cfg-csi-mock-volumes-394 Nov 13 05:24:19.175: INFO: creating *v1.RoleBinding: csi-mock-volumes-394-3149/csi-resizer-role-cfg Nov 13 05:24:19.179: INFO: creating *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-snapshotter Nov 13 05:24:19.182: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-394 Nov 13 05:24:19.182: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-394 Nov 13 05:24:19.184: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-394 Nov 13 05:24:19.187: INFO: creating *v1.Role: csi-mock-volumes-394-3149/external-snapshotter-leaderelection-csi-mock-volumes-394 Nov 13 05:24:19.190: INFO: creating *v1.RoleBinding: csi-mock-volumes-394-3149/external-snapshotter-leaderelection Nov 13 05:24:19.193: INFO: creating *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-mock Nov 13 05:24:19.195: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-394 Nov 13 05:24:19.198: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-394 Nov 13 05:24:19.201: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-394 Nov 13 05:24:19.203: INFO: creating 
*v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-394 Nov 13 05:24:19.206: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-394 Nov 13 05:24:19.209: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-394 Nov 13 05:24:19.211: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-394 Nov 13 05:24:19.213: INFO: creating *v1.StatefulSet: csi-mock-volumes-394-3149/csi-mockplugin Nov 13 05:24:19.217: INFO: creating *v1.StatefulSet: csi-mock-volumes-394-3149/csi-mockplugin-attacher Nov 13 05:24:19.221: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-394 to register on node node2 STEP: Creating pod Nov 13 05:24:28.739: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:24:28.743: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5nslw] to have phase Bound Nov 13 05:24:28.745: INFO: PersistentVolumeClaim pvc-5nslw found but phase is Pending instead of Bound. 
Nov 13 05:24:30.748: INFO: PersistentVolumeClaim pvc-5nslw found and phase=Bound (2.005154392s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-p4hqz Nov 13 05:24:38.777: INFO: Deleting pod "pvc-volume-tester-p4hqz" in namespace "csi-mock-volumes-394" Nov 13 05:24:38.782: INFO: Wait up to 5m0s for pod "pvc-volume-tester-p4hqz" to be fully deleted STEP: Deleting claim pvc-5nslw Nov 13 05:24:52.796: INFO: Waiting up to 2m0s for PersistentVolume pvc-227696d3-ddeb-4b7c-9283-e36ce84f5cf1 to get deleted Nov 13 05:24:52.799: INFO: PersistentVolume pvc-227696d3-ddeb-4b7c-9283-e36ce84f5cf1 found and phase=Bound (2.551791ms) Nov 13 05:24:54.804: INFO: PersistentVolume pvc-227696d3-ddeb-4b7c-9283-e36ce84f5cf1 was removed STEP: Deleting storageclass csi-mock-volumes-394-scnddff STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-394 STEP: Waiting for namespaces [csi-mock-volumes-394] to vanish STEP: uninstalling csi mock driver Nov 13 05:25:00.817: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-attacher Nov 13 05:25:00.821: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-394 Nov 13 05:25:00.824: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-394 Nov 13 05:25:00.828: INFO: deleting *v1.Role: csi-mock-volumes-394-3149/external-attacher-cfg-csi-mock-volumes-394 Nov 13 05:25:00.832: INFO: deleting *v1.RoleBinding: csi-mock-volumes-394-3149/csi-attacher-role-cfg Nov 13 05:25:00.836: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-provisioner Nov 13 05:25:00.839: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-394 Nov 13 05:25:00.842: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-394 Nov 13 05:25:00.845: INFO: deleting *v1.Role: csi-mock-volumes-394-3149/external-provisioner-cfg-csi-mock-volumes-394 Nov 13 05:25:00.848: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-394-3149/csi-provisioner-role-cfg Nov 13 05:25:00.852: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-resizer Nov 13 05:25:00.855: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-394 Nov 13 05:25:00.858: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-394 Nov 13 05:25:00.861: INFO: deleting *v1.Role: csi-mock-volumes-394-3149/external-resizer-cfg-csi-mock-volumes-394 Nov 13 05:25:00.864: INFO: deleting *v1.RoleBinding: csi-mock-volumes-394-3149/csi-resizer-role-cfg Nov 13 05:25:00.868: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-snapshotter Nov 13 05:25:00.872: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-394 Nov 13 05:25:00.875: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-394 Nov 13 05:25:00.878: INFO: deleting *v1.Role: csi-mock-volumes-394-3149/external-snapshotter-leaderelection-csi-mock-volumes-394 Nov 13 05:25:00.882: INFO: deleting *v1.RoleBinding: csi-mock-volumes-394-3149/external-snapshotter-leaderelection Nov 13 05:25:00.885: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-394-3149/csi-mock Nov 13 05:25:00.888: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-394 Nov 13 05:25:00.892: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-394 Nov 13 05:25:00.896: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-394 Nov 13 05:25:00.899: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-394 Nov 13 05:25:00.902: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-394 Nov 13 05:25:00.905: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-394 Nov 13 05:25:00.908: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-394 Nov 13 05:25:00.912: INFO: 
deleting *v1.StatefulSet: csi-mock-volumes-394-3149/csi-mockplugin Nov 13 05:25:00.916: INFO: deleting *v1.StatefulSet: csi-mock-volumes-394-3149/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-394-3149 STEP: Waiting for namespaces [csi-mock-volumes-394-3149] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:25:28.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:69.855 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":7,"skipped":301,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:25:28.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:25:28.657: INFO: The status of Pod 
test-hostpath-type-5hlcl is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:25:30.661: INFO: The status of Pod test-hostpath-type-5hlcl is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:25:32.661: INFO: The status of Pod test-hostpath-type-5hlcl is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 13 05:25:32.664: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4465 PodName:test-hostpath-type-5hlcl ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:25:32.664: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:25:34.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-4465" for this suite. 
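Editor's note: the loop-device helper earlier in this log uses the shell fragment `E2E_LOOP_DEV=$(losetup | grep ... ) 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}`. The redirection order there is subtle: `2>&1 > /dev/null` first duplicates stderr onto the *current* stdout, then sends stdout to `/dev/null`, so only stderr survives the outer capture. A self-contained sketch of that behavior (the `echo` payloads are illustrative stand-ins for `losetup` output):

```shell
# 2>&1 is applied before > /dev/null, so stderr inherits the original stdout
# (the command-substitution pipe) while stdout itself is discarded.
out=$( { echo to-stdout; echo to-stderr 1>&2; } 2>&1 > /dev/null )
echo "captured: $out"   # only the stderr text is captured
```

Reversing the order to `> /dev/null 2>&1` would discard both streams, which is why the order matters in helpers like the one in this log.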
• [SLOW TEST:6.203 seconds]
[sig-storage] HostPathType Block Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev","total":-1,"completed":11,"skipped":353,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:34.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:25:38.925: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036-backend && ln -s /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036-backend /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036] Namespace:persistent-local-volumes-test-3989 PodName:hostexec-node2-7c88j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:38.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:25:39.011: INFO: Creating a PV followed by a PVC
Nov 13 05:25:39.018: INFO: Waiting for PV local-pvtn65j to bind to PVC pvc-l8wkj
Nov 13 05:25:39.018: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-l8wkj] to have phase Bound
Nov 13 05:25:39.020: INFO: PersistentVolumeClaim pvc-l8wkj found but phase is Pending instead of Bound.
Nov 13 05:25:41.023: INFO: PersistentVolumeClaim pvc-l8wkj found but phase is Pending instead of Bound.
Nov 13 05:25:43.028: INFO: PersistentVolumeClaim pvc-l8wkj found and phase=Bound (4.009855859s)
Nov 13 05:25:43.028: INFO: Waiting up to 3m0s for PersistentVolume local-pvtn65j to have phase Bound
Nov 13 05:25:43.031: INFO: PersistentVolume local-pvtn65j found and phase=Bound (2.332928ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:25:47.062: INFO: pod "pod-46d61ccb-f6c8-4045-896a-45a972a6f494" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:25:47.062: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3989 PodName:pod-46d61ccb-f6c8-4045-896a-45a972a6f494 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:47.062: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:47.145: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[It] should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Nov 13 05:25:47.145: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3989 PodName:pod-46d61ccb-f6c8-4045-896a-45a972a6f494 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:47.145: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:47.231: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Writing in pod1
Nov 13 05:25:47.231: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3989 PodName:pod-46d61ccb-f6c8-4045-896a-45a972a6f494 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:47.231: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:47.302: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036 > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[AfterEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-46d61ccb-f6c8-4045-896a-45a972a6f494 in namespace persistent-local-volumes-test-3989
[AfterEach] [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:25:47.308: INFO: Deleting PersistentVolumeClaim "pvc-l8wkj"
Nov 13 05:25:47.312: INFO: Deleting PersistentVolume "local-pvtn65j"
STEP: Removing the test directory
Nov 13 05:25:47.316: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036 && rm -r /tmp/local-volume-test-847503b1-89dc-4dd0-ac41-8bf9faa9e036-backend] Namespace:persistent-local-volumes-test-3989 PodName:hostexec-node2-7c88j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:47.316: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:47.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3989" for this suite.

• [SLOW TEST:12.579 seconds]
[sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":380,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:28.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:25:32.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-137992e6-5ac2-44e6-b22d-3443e46ebd40] Namespace:persistent-local-volumes-test-7249 PodName:hostexec-node2-vjrgm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:32.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:25:32.720: INFO: Creating a PV followed by a PVC
Nov 13 05:25:32.728: INFO: Waiting for PV local-pv4lh4j to bind to PVC pvc-2c8kn
Nov 13 05:25:32.728: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2c8kn] to have phase Bound
Nov 13 05:25:32.730: INFO: PersistentVolumeClaim pvc-2c8kn found but phase is Pending instead of Bound.
Nov 13 05:25:34.735: INFO: PersistentVolumeClaim pvc-2c8kn found but phase is Pending instead of Bound.
Nov 13 05:25:36.740: INFO: PersistentVolumeClaim pvc-2c8kn found but phase is Pending instead of Bound.
Nov 13 05:25:38.745: INFO: PersistentVolumeClaim pvc-2c8kn found but phase is Pending instead of Bound.
Nov 13 05:25:40.749: INFO: PersistentVolumeClaim pvc-2c8kn found but phase is Pending instead of Bound.
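The PVC wait above is a poll-until-condition loop: check the claim's phase roughly every 2 seconds, give up after 3 minutes. A generic sketch of that pattern (the `kubectl` usage in the comment is an assumption about how one would reproduce it by hand; `pvc-l8wkj` is taken from the log):

```shell
# wait_for TIMEOUT INTERVAL CMD...: run CMD every INTERVAL seconds until it
# succeeds or TIMEOUT seconds have elapsed. Returns CMD's success/failure.
wait_for() {
  timeout="$1"; interval="$2"; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then return 0; fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1
}

# Against a real cluster, the same wait could look like (illustrative):
#   wait_for 180 2 sh -c \
#     '[ "$(kubectl get pvc pvc-l8wkj -o jsonpath={.status.phase})" = Bound ]'
```

The e2e framework does the same thing in Go with `wait.PollImmediate`; the shell form just makes the timeout/interval structure explicit.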
Nov 13 05:25:42.753: INFO: PersistentVolumeClaim pvc-2c8kn found and phase=Bound (10.025178387s)
Nov 13 05:25:42.753: INFO: Waiting up to 3m0s for PersistentVolume local-pv4lh4j to have phase Bound
Nov 13 05:25:42.756: INFO: PersistentVolume local-pv4lh4j found and phase=Bound (2.826894ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Nov 13 05:25:46.788: INFO: pod "pod-7665c200-728a-4d72-9f5c-dc928e515f15" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:25:46.789: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7249 PodName:pod-7665c200-728a-4d72-9f5c-dc928e515f15 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:46.789: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:46.877: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
Nov 13 05:25:46.877: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7249 PodName:pod-7665c200-728a-4d72-9f5c-dc928e515f15 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:46.877: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:46.955: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Nov 13 05:25:50.976: INFO: pod "pod-e6c0a92b-3f44-4b5a-87d2-7ab7b5a6455a" created on Node "node2"
Nov 13 05:25:50.976: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7249 PodName:pod-e6c0a92b-3f44-4b5a-87d2-7ab7b5a6455a ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:50.976: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:51.544: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Writing in pod2
Nov 13 05:25:51.544: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-137992e6-5ac2-44e6-b22d-3443e46ebd40 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7249 PodName:pod-e6c0a92b-3f44-4b5a-87d2-7ab7b5a6455a ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:51.544: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:52.067: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-137992e6-5ac2-44e6-b22d-3443e46ebd40 > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
STEP: Reading in pod1
Nov 13 05:25:52.067: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7249 PodName:pod-7665c200-728a-4d72-9f5c-dc928e515f15 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:25:52.067: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:25:52.311: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-137992e6-5ac2-44e6-b22d-3443e46ebd40", stderr: "", err: <nil>
STEP: Deleting pod1
STEP: Deleting pod pod-7665c200-728a-4d72-9f5c-dc928e515f15 in namespace persistent-local-volumes-test-7249
STEP: Deleting pod2
STEP: Deleting pod pod-e6c0a92b-3f44-4b5a-87d2-7ab7b5a6455a in namespace persistent-local-volumes-test-7249
[AfterEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:25:52.322: INFO: Deleting PersistentVolumeClaim "pvc-2c8kn"
Nov 13 05:25:52.326: INFO: Deleting PersistentVolume "local-pv4lh4j"
STEP: Removing the test directory
Nov 13 05:25:52.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-137992e6-5ac2-44e6-b22d-3443e46ebd40] Namespace:persistent-local-volumes-test-7249 PodName:hostexec-node2-vjrgm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:25:52.330: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:25:52.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7249" for this suite.

• [SLOW TEST:23.873 seconds]
[sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":626,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:39.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not call NodeUnstage after NodeStage final error
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828
STEP: Building a driver namespace object, basename csi-mock-volumes-6419
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock proxy
Nov 13 05:24:39.885: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-attacher
Nov 13 05:24:39.888: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6419
Nov 13 05:24:39.888: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6419
Nov 13 05:24:39.891: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6419
Nov 13 05:24:39.894: INFO: creating *v1.Role: csi-mock-volumes-6419-900/external-attacher-cfg-csi-mock-volumes-6419
Nov 13 05:24:39.896: INFO: creating *v1.RoleBinding: csi-mock-volumes-6419-900/csi-attacher-role-cfg
Nov 13 05:24:39.899: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-provisioner
Nov 13 05:24:39.903: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6419
Nov 13 05:24:39.904: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6419
Nov 13 05:24:39.907: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6419
Nov 13 05:24:39.910: INFO: creating *v1.Role: csi-mock-volumes-6419-900/external-provisioner-cfg-csi-mock-volumes-6419
Nov 13 05:24:39.912: INFO: creating *v1.RoleBinding: csi-mock-volumes-6419-900/csi-provisioner-role-cfg
Nov 13 05:24:39.915: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-resizer
Nov 13 05:24:39.918: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6419
Nov 13 05:24:39.918: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6419
Nov 13 05:24:39.920: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6419
Nov 13 05:24:39.923: INFO: creating *v1.Role: csi-mock-volumes-6419-900/external-resizer-cfg-csi-mock-volumes-6419
Nov 13 05:24:39.926: INFO: creating *v1.RoleBinding: csi-mock-volumes-6419-900/csi-resizer-role-cfg
Nov 13 05:24:39.929: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-snapshotter
Nov 13 05:24:39.932: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6419
Nov 13 05:24:39.932: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6419
Nov 13 05:24:39.934: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6419
Nov 13 05:24:39.938: INFO: creating *v1.Role: csi-mock-volumes-6419-900/external-snapshotter-leaderelection-csi-mock-volumes-6419
Nov 13 05:24:39.941: INFO: creating *v1.RoleBinding: csi-mock-volumes-6419-900/external-snapshotter-leaderelection
Nov 13 05:24:39.944: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-mock
Nov 13 05:24:39.947: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6419
Nov 13 05:24:39.950: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6419
Nov 13 05:24:39.953: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6419
Nov 13 05:24:39.956: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6419
Nov 13 05:24:39.958: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6419
Nov 13 05:24:39.961: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6419
Nov 13 05:24:39.963: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6419
Nov 13 05:24:39.966: INFO: creating *v1.StatefulSet: csi-mock-volumes-6419-900/csi-mockplugin
Nov 13 05:24:39.971: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6419
Nov 13 05:24:39.973: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6419"
Nov 13 05:24:39.975: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6419 to register on node node1
I1113 05:25:02.111369 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6419","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1113 05:25:02.221288 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1113 05:25:02.222745 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6419","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1113 05:25:02.224308 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1113 05:25:02.228280 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1113 05:25:02.315956 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6419"},"Error":"","FullError":null}
STEP: Creating pod
Nov 13 05:25:06.370: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:25:06.374: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-zwshm] to have phase Bound
Nov 13 05:25:06.377: INFO: PersistentVolumeClaim pvc-zwshm found but phase is Pending instead of Bound.
I1113 05:25:06.382023 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4"}}},"Error":"","FullError":null}
Nov 13 05:25:08.380: INFO: PersistentVolumeClaim pvc-zwshm found and phase=Bound (2.005689866s)
Nov 13 05:25:08.395: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-zwshm] to have phase Bound
Nov 13 05:25:08.397: INFO: PersistentVolumeClaim pvc-zwshm found and phase=Bound (2.03871ms)
STEP: Waiting for expected CSI calls
I1113 05:25:08.604205 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:25:08.606264 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4","storage.kubernetes.io/csiProvisionerIdentity":"1636781102227-8081-csi-mock-csi-mock-volumes-6419"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}}
I1113 05:25:09.111079 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:25:09.119568 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4","storage.kubernetes.io/csiProvisionerIdentity":"1636781102227-8081-csi-mock-csi-mock-volumes-6419"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}}
STEP: Deleting the previously created pod
Nov 13 05:25:09.398: INFO: Deleting pod "pvc-volume-tester-jhb4f" in namespace "csi-mock-volumes-6419"
Nov 13 05:25:09.403: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jhb4f" to be fully deleted
I1113 05:25:10.123987 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:25:10.125921 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4","storage.kubernetes.io/csiProvisionerIdentity":"1636781102227-8081-csi-mock-csi-mock-volumes-6419"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}}
I1113 05:25:12.141299 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:25:12.143408 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4","storage.kubernetes.io/csiProvisionerIdentity":"1636781102227-8081-csi-mock-csi-mock-volumes-6419"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}}
STEP: Waiting for all remaining expected CSI calls
STEP: Deleting pod pvc-volume-tester-jhb4f
Nov 13 05:25:14.410: INFO: Deleting pod "pvc-volume-tester-jhb4f" in namespace "csi-mock-volumes-6419"
STEP: Deleting claim pvc-zwshm
Nov 13 05:25:14.421: INFO: Waiting up to 2m0s for PersistentVolume pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4 to get deleted
Nov 13 05:25:14.424: INFO: PersistentVolume pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4 found and phase=Bound (2.493718ms)
I1113 05:25:14.436806 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
Nov 13 05:25:16.428: INFO: PersistentVolume pvc-3d89ef75-7a42-46ab-8ce1-6895d2778bc4 was removed
STEP: Deleting storageclass csi-mock-volumes-6419-scw4sk2
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-6419
STEP: Waiting for namespaces [csi-mock-volumes-6419] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:25:22.468: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-attacher
Nov 13 05:25:22.472: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6419
Nov 13 05:25:22.476: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6419
Nov 13 05:25:22.480: INFO: deleting *v1.Role: csi-mock-volumes-6419-900/external-attacher-cfg-csi-mock-volumes-6419
Nov 13 05:25:22.483: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6419-900/csi-attacher-role-cfg
Nov 13 05:25:22.487: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-provisioner
Nov 13 05:25:22.491: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6419
Nov 13 05:25:22.494: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6419
Nov 13 05:25:22.498: INFO: deleting *v1.Role: csi-mock-volumes-6419-900/external-provisioner-cfg-csi-mock-volumes-6419
Nov 13 05:25:22.501: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6419-900/csi-provisioner-role-cfg
Nov 13 05:25:22.504: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-resizer
Nov 13 05:25:22.507: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6419
Nov 13 05:25:22.513: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6419
Nov 13 05:25:22.517: INFO: deleting *v1.Role: csi-mock-volumes-6419-900/external-resizer-cfg-csi-mock-volumes-6419
Nov 13 05:25:22.525: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6419-900/csi-resizer-role-cfg
Nov 13 05:25:22.529: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-snapshotter
Nov 13 05:25:22.536: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6419
Nov 13 05:25:22.539: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6419
Nov 13 05:25:22.543: INFO: deleting *v1.Role: csi-mock-volumes-6419-900/external-snapshotter-leaderelection-csi-mock-volumes-6419
Nov 13 05:25:22.547: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6419-900/external-snapshotter-leaderelection
Nov 13 05:25:22.552: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6419-900/csi-mock
Nov 13 05:25:22.556: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6419
Nov 13 05:25:22.559: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6419
Nov 13 05:25:22.563: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6419
Nov 13 05:25:22.566: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6419
Nov 13 05:25:22.570: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6419
Nov 13 05:25:22.573: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6419
Nov 13 05:25:22.576: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6419
Nov 13 05:25:22.580: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6419-900/csi-mockplugin
Nov 13 05:25:22.584: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6419
STEP: deleting the driver namespace: csi-mock-volumes-6419-900
STEP: Waiting for namespaces [csi-mock-volumes-6419-900] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:00.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:80.777 seconds]
[sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI NodeStage error cases [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734
should not call NodeUnstage after NodeStage final error
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error","total":-1,"completed":6,"skipped":236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:00.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 13 05:26:00.674: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:00.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-9044" for this suite.
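In the CSI NodeStage test above, the mock driver keeps returning a "fake error" and the kubelet keeps retrying `NodeStageVolume`; the attempt timestamps (05:25:08.6, 09.1, 10.1, 12.1) show the retry interval growing. A hedged sketch of that retry-with-exponential-backoff pattern (illustrative only, not kubelet's actual code; the attempt cap and initial delay are made up):

```shell
# try_with_backoff CMD...: retry CMD up to MAX_ATTEMPTS times,
# doubling the sleep between attempts, as a volume manager might
# when an operation like NodeStageVolume fails.
MAX_ATTEMPTS=5
try_with_backoff() {
  delay=1
  attempt=1
  while [ "$attempt" -le "$MAX_ATTEMPTS" ]; do
    if "$@"; then return 0; fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
  return 1
}
```

The point the test verifies is orthogonal to the backoff itself: because NodeStage never succeeded, the kubelet must not issue a `NodeUnstageVolume` during teardown, only the controller-side `DeleteVolume` (which the log shows).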
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82

S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
should create unbound pv count metrics for pvc controller after creating pv only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485
Only supported for providers [gce gke aws] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:00.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-fc860e43-3fff-46e7-929b-b510ce27c117
STEP: Creating a pod to test consume configMaps
Nov 13 05:26:00.820: INFO: Waiting up to 5m0s for pod "pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2" in namespace "configmap-4887" to be "Succeeded or Failed"
Nov 13 05:26:00.822: INFO: Pod "pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.921613ms
Nov 13 05:26:02.827: INFO: Pod "pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006822869s
Nov 13 05:26:04.830: INFO: Pod "pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010084293s
STEP: Saw pod success
Nov 13 05:26:04.830: INFO: Pod "pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2" satisfied condition "Succeeded or Failed"
Nov 13 05:26:04.833: INFO: Trying to get logs from node node2 pod pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2 container agnhost-container:
STEP: delete the pod
Nov 13 05:26:04.849: INFO: Waiting for pod pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2 to disappear
Nov 13 05:26:04.851: INFO: Pod pod-configmaps-2022a236-e1fd-4dbd-a37e-532cab0b1ba2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:04.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4887" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":303,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:26.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be passed when podInfoOnMount=nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-8124
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:25:26.518: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-attacher
Nov 13 05:25:26.521: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8124
Nov 13 05:25:26.521: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8124
Nov 13 05:25:26.524: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8124
Nov 13 05:25:26.527: INFO: creating *v1.Role: csi-mock-volumes-8124-5700/external-attacher-cfg-csi-mock-volumes-8124
Nov 13 05:25:26.530: INFO: creating *v1.RoleBinding: csi-mock-volumes-8124-5700/csi-attacher-role-cfg
Nov 13 05:25:26.532: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-provisioner
Nov 13 05:25:26.535: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8124
Nov 13 05:25:26.535: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8124
Nov 13 05:25:26.537: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8124
Nov 13 05:25:26.540: INFO: creating *v1.Role: csi-mock-volumes-8124-5700/external-provisioner-cfg-csi-mock-volumes-8124
Nov 13 05:25:26.543: INFO: creating *v1.RoleBinding: csi-mock-volumes-8124-5700/csi-provisioner-role-cfg
Nov 13 05:25:26.546: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-resizer
Nov 13 05:25:26.549: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8124
Nov 13 05:25:26.549: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8124
Nov 13 05:25:26.551: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8124
Nov 13 05:25:26.553: INFO: creating *v1.Role: csi-mock-volumes-8124-5700/external-resizer-cfg-csi-mock-volumes-8124
Nov 13 05:25:26.556: INFO: creating *v1.RoleBinding: csi-mock-volumes-8124-5700/csi-resizer-role-cfg
Nov 13 05:25:26.559: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-snapshotter
Nov 13 05:25:26.561: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8124
Nov 13 05:25:26.561: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8124
Nov 13 05:25:26.565: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8124
Nov 13 05:25:26.567: INFO: creating *v1.Role: csi-mock-volumes-8124-5700/external-snapshotter-leaderelection-csi-mock-volumes-8124
Nov 13 05:25:26.570: INFO: creating *v1.RoleBinding: csi-mock-volumes-8124-5700/external-snapshotter-leaderelection
Nov 13 05:25:26.572: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-mock
Nov 13 05:25:26.575: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8124
Nov 13 05:25:26.577: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8124
Nov 13 05:25:26.580: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8124
Nov 13 05:25:26.582: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8124
Nov 13 05:25:26.585: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8124
Nov 13 05:25:26.587: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8124
Nov 13 05:25:26.589: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8124
Nov 13 05:25:26.592: INFO: creating *v1.StatefulSet: csi-mock-volumes-8124-5700/csi-mockplugin
Nov 13 05:25:26.595: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8124
Nov 13 05:25:26.599: INFO: creating *v1.StatefulSet: csi-mock-volumes-8124-5700/csi-mockplugin-attacher
Nov 13 05:25:26.603: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8124"
Nov 13 05:25:26.605: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8124 to register on node node1
STEP: Creating pod
Nov 13 05:25:31.620: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:25:31.624: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f4c97] to have phase Bound
Nov 13 05:25:31.626: INFO: PersistentVolumeClaim pvc-f4c97 found but phase is Pending instead of Bound.
Nov 13 05:25:33.631: INFO: PersistentVolumeClaim pvc-f4c97 found and phase=Bound (2.006426744s)
STEP: Deleting the previously created pod
Nov 13 05:25:41.653: INFO: Deleting pod "pvc-volume-tester-854n5" in namespace "csi-mock-volumes-8124"
Nov 13 05:25:41.659: INFO: Wait up to 5m0s for pod "pvc-volume-tester-854n5" to be fully deleted
STEP: Checking CSI driver logs
Nov 13 05:25:47.680: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/579396b1-1382-492e-8173-73ab62ad5f0c/volumes/kubernetes.io~csi/pvc-9f38ff2f-2ec1-4968-a67a-b9f56a10f78b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-854n5
Nov 13 05:25:47.680: INFO: Deleting pod "pvc-volume-tester-854n5" in namespace "csi-mock-volumes-8124"
STEP: Deleting claim pvc-f4c97
Nov 13 05:25:47.687: INFO: Waiting up to 2m0s for PersistentVolume pvc-9f38ff2f-2ec1-4968-a67a-b9f56a10f78b to get deleted
Nov 13 05:25:47.689: INFO: PersistentVolume pvc-9f38ff2f-2ec1-4968-a67a-b9f56a10f78b found and phase=Bound (1.8638ms)
Nov 13 05:25:49.693: INFO: PersistentVolume pvc-9f38ff2f-2ec1-4968-a67a-b9f56a10f78b was removed
STEP: Deleting storageclass csi-mock-volumes-8124-scfm4c6
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-8124
STEP: Waiting for namespaces [csi-mock-volumes-8124] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:25:55.705: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-attacher
Nov 13 05:25:55.708: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8124
Nov 13 05:25:55.712: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8124
Nov 13 05:25:55.715: INFO: deleting *v1.Role: csi-mock-volumes-8124-5700/external-attacher-cfg-csi-mock-volumes-8124
Nov 13 05:25:55.718: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8124-5700/csi-attacher-role-cfg
Nov 13 05:25:55.722: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-provisioner
Nov 13 05:25:55.725: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8124
Nov 13 05:25:55.733: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8124
Nov 13 05:25:55.736: INFO: deleting *v1.Role: csi-mock-volumes-8124-5700/external-provisioner-cfg-csi-mock-volumes-8124
Nov 13 05:25:55.742: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8124-5700/csi-provisioner-role-cfg
Nov 13 05:25:55.748: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-resizer
Nov 13 05:25:55.754: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8124
Nov 13 05:25:55.759: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8124
Nov 13 05:25:55.763: INFO: deleting *v1.Role: csi-mock-volumes-8124-5700/external-resizer-cfg-csi-mock-volumes-8124
Nov 13 05:25:55.766: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8124-5700/csi-resizer-role-cfg
Nov 13 05:25:55.770: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-snapshotter
Nov 13 05:25:55.774: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8124
Nov 13 05:25:55.778: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8124
Nov 13 05:25:55.782: INFO: deleting *v1.Role: csi-mock-volumes-8124-5700/external-snapshotter-leaderelection-csi-mock-volumes-8124
Nov 13 05:25:55.785: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8124-5700/external-snapshotter-leaderelection
Nov 13 05:25:55.789: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8124-5700/csi-mock
Nov 13 05:25:55.793: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8124
Nov 13 05:25:55.796: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8124
Nov 13 05:25:55.799: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8124
Nov 13 05:25:55.803: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8124
Nov 13 05:25:55.806: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8124
Nov 13 05:25:55.809: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8124
Nov 13 05:25:55.813: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8124
Nov 13 05:25:55.816: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8124-5700/csi-mockplugin
Nov 13 05:25:55.820: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8124
Nov 13 05:25:55.823: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8124-5700/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-8124-5700
STEP: Waiting for namespaces [csi-mock-volumes-8124-5700] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:07.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:41.394 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":10,"skipped":400,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:07.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-block-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325
STEP: Create a pod for further testing
Nov 13 05:26:07.905: INFO: The status of Pod test-hostpath-type-j7ppp is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:26:09.908: INFO: The status of Pod test-hostpath-type-j7ppp is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:26:11.909: INFO: The status of Pod test-hostpath-type-j7ppp is Running (Ready = true)
STEP: running on node node1
STEP: Create a block device for further testing
Nov 13 05:26:11.912: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4747 PodName:test-hostpath-type-j7ppp ContainerName:host-path-testing Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:26:11.912: INFO: >>> kubeConfig: /root/.kube/config
[It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Block Device [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:14.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-block-dev-4747" for this suite.
• [SLOW TEST:6.184 seconds]
[sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory","total":-1,"completed":11,"skipped":409,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:04.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a"
Nov 13 05:26:06.948: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a && dd if=/dev/zero of=/tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a/file] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:06.948: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:07.072: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:07.072: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:07.168: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a && chmod o+rwx /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:07.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:26:07.415: INFO: Creating a PV followed by a PVC
Nov 13 05:26:07.423: INFO: Waiting for PV local-pvgpvsp to bind to PVC pvc-fhkq4
Nov 13 05:26:07.423: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fhkq4] to have phase Bound
Nov 13 05:26:07.426: INFO: PersistentVolumeClaim pvc-fhkq4 found but phase is Pending instead of Bound.
Nov 13 05:26:09.429: INFO: PersistentVolumeClaim pvc-fhkq4 found but phase is Pending instead of Bound.
Nov 13 05:26:11.433: INFO: PersistentVolumeClaim pvc-fhkq4 found but phase is Pending instead of Bound.
Nov 13 05:26:13.437: INFO: PersistentVolumeClaim pvc-fhkq4 found and phase=Bound (6.014436409s)
Nov 13 05:26:13.437: INFO: Waiting up to 3m0s for PersistentVolume local-pvgpvsp to have phase Bound
Nov 13 05:26:13.440: INFO: PersistentVolume local-pvgpvsp found and phase=Bound (2.229062ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:26:17.469: INFO: pod "pod-8d9f845c-d426-4404-adc2-327128b8a268" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:26:17.470: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-714 PodName:pod-8d9f845c-d426-4404-adc2-327128b8a268 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:26:17.470: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:17.623: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[It] should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Nov 13 05:26:17.623: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-714 PodName:pod-8d9f845c-d426-4404-adc2-327128b8a268 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:26:17.624: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:17.701: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Writing in pod1
Nov 13 05:26:17.702: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-714 PodName:pod-8d9f845c-d426-4404-adc2-327128b8a268 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:26:17.702: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:17.779: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[AfterEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-8d9f845c-d426-4404-adc2-327128b8a268 in namespace persistent-local-volumes-test-714
[AfterEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:26:17.784: INFO: Deleting PersistentVolumeClaim "pvc-fhkq4"
Nov 13 05:26:17.788: INFO: Deleting PersistentVolume "local-pvgpvsp"
Nov 13 05:26:17.793: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:17.793: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:17.890: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:17.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a/file
Nov 13 05:26:17.977: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:17.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a
Nov 13 05:26:18.062: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ec81f20f-84ca-4d5e-b3f6-6522f21b6e5a] Namespace:persistent-local-volumes-test-714 PodName:hostexec-node2-gbkdw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:18.062: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:18.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-714" for this suite.
• [SLOW TEST:13.287 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":316,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType File [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:18.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-file
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType File [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124
STEP: Create a pod for further testing
Nov 13 05:26:18.314: INFO: The status of Pod test-hostpath-type-zlczn is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:26:20.317: INFO: The status of Pod test-hostpath-type-zlczn is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:26:22.318: INFO: The status of Pod test-hostpath-type-zlczn is Running (Ready = true)
STEP: running on node node1
STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate
[It] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143
[AfterEach] [sig-storage] HostPathType File [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:30.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-file-9207" for this suite.

• [SLOW TEST:12.100 seconds]
[sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143
------------------------------
{"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile","total":-1,"completed":9,"skipped":363,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:15.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume by restarting pod if attach=on, nodeExpansion=on
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-6308
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:25:15.169: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-attacher
Nov 13 05:25:15.172: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6308
Nov 13 05:25:15.172: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6308
Nov 13 05:25:15.175: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6308
Nov 13 05:25:15.178: INFO: creating *v1.Role: csi-mock-volumes-6308-483/external-attacher-cfg-csi-mock-volumes-6308
Nov 13 05:25:15.180: INFO: creating *v1.RoleBinding: csi-mock-volumes-6308-483/csi-attacher-role-cfg
Nov 13 05:25:15.183: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-provisioner
Nov 13 05:25:15.185: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6308
Nov 13 05:25:15.185: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6308
Nov 13 05:25:15.188: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6308
Nov 13 05:25:15.191: INFO: creating *v1.Role: csi-mock-volumes-6308-483/external-provisioner-cfg-csi-mock-volumes-6308
Nov 13 05:25:15.194: INFO: creating *v1.RoleBinding: csi-mock-volumes-6308-483/csi-provisioner-role-cfg
Nov 13 05:25:15.196: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-resizer
Nov 13 05:25:15.199: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6308
Nov 13 05:25:15.199: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6308
Nov 13 05:25:15.202: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6308
Nov 13 05:25:15.204: INFO: creating *v1.Role: csi-mock-volumes-6308-483/external-resizer-cfg-csi-mock-volumes-6308
Nov 13 05:25:15.208: INFO: creating *v1.RoleBinding: csi-mock-volumes-6308-483/csi-resizer-role-cfg
Nov 13 05:25:15.211: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-snapshotter
Nov 13 05:25:15.214: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6308
Nov 13 05:25:15.214: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6308
Nov 13 05:25:15.216: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6308
Nov 13 05:25:15.219: INFO: creating *v1.Role: csi-mock-volumes-6308-483/external-snapshotter-leaderelection-csi-mock-volumes-6308
Nov 13 05:25:15.221: INFO: creating *v1.RoleBinding: csi-mock-volumes-6308-483/external-snapshotter-leaderelection
Nov 13 05:25:15.224: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-mock
Nov 13 05:25:15.226: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6308
Nov 13 05:25:15.229: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6308
Nov 13 05:25:15.233: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6308
Nov 13 05:25:15.236: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6308
Nov 13 05:25:15.238: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6308
Nov 13 05:25:15.242: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6308
Nov 13 05:25:15.244: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6308
Nov 13 05:25:15.248: INFO: creating *v1.StatefulSet: csi-mock-volumes-6308-483/csi-mockplugin
Nov 13 05:25:15.252: INFO: creating *v1.StatefulSet: csi-mock-volumes-6308-483/csi-mockplugin-attacher
Nov 13 05:25:15.256: INFO: creating *v1.StatefulSet: csi-mock-volumes-6308-483/csi-mockplugin-resizer
Nov 13 05:25:15.259: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6308 to register on node node2
STEP: Creating pod
Nov 13 05:25:24.781: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:25:24.785: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-m4hgz] to have phase Bound
Nov 13 05:25:24.787: INFO: PersistentVolumeClaim pvc-m4hgz found but phase is Pending instead of Bound.
Nov 13 05:25:26.792: INFO: PersistentVolumeClaim pvc-m4hgz found and phase=Bound (2.007300465s)
STEP: Expanding current pvc
STEP: Waiting for persistent volume resize to finish
STEP: Checking for conditions on pvc
STEP: Deleting the previously created pod
Nov 13 05:25:40.829: INFO: Deleting pod "pvc-volume-tester-bzdcb" in namespace "csi-mock-volumes-6308"
Nov 13 05:25:40.835: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bzdcb" to be fully deleted
STEP: Creating a new pod with same volume
STEP: Waiting for PVC resize to finish
STEP: Deleting pod pvc-volume-tester-bzdcb
Nov 13 05:25:58.864: INFO: Deleting pod "pvc-volume-tester-bzdcb" in namespace "csi-mock-volumes-6308"
STEP: Deleting pod pvc-volume-tester-2rpvj
Nov 13 05:25:58.866: INFO: Deleting pod "pvc-volume-tester-2rpvj" in namespace "csi-mock-volumes-6308"
Nov 13 05:25:58.872: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2rpvj" to be fully deleted
STEP: Deleting claim pvc-m4hgz
Nov 13 05:26:12.887: INFO: Waiting up to 2m0s for PersistentVolume pvc-b1bb91a4-6661-4ba9-b8d2-0cced440aba2 to get deleted
Nov 13 05:26:12.889: INFO: PersistentVolume pvc-b1bb91a4-6661-4ba9-b8d2-0cced440aba2 found and phase=Bound (2.049096ms)
Nov 13 05:26:14.894: INFO: PersistentVolume pvc-b1bb91a4-6661-4ba9-b8d2-0cced440aba2 was removed
STEP: Deleting storageclass csi-mock-volumes-6308-scx92d8
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-6308
STEP: Waiting for namespaces [csi-mock-volumes-6308] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:26:20.906: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-attacher
Nov 13 05:26:20.911: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6308
Nov 13 05:26:20.915: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6308
Nov 13 05:26:20.918: INFO: deleting *v1.Role: csi-mock-volumes-6308-483/external-attacher-cfg-csi-mock-volumes-6308
Nov 13 05:26:20.921: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6308-483/csi-attacher-role-cfg
Nov 13 05:26:20.925: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-provisioner
Nov 13 05:26:20.928: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6308
Nov 13 05:26:20.931: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6308
Nov 13 05:26:20.935: INFO: deleting *v1.Role: csi-mock-volumes-6308-483/external-provisioner-cfg-csi-mock-volumes-6308
Nov 13 05:26:20.941: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6308-483/csi-provisioner-role-cfg
Nov 13 05:26:20.948: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-resizer
Nov 13 05:26:20.959: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6308
Nov 13 05:26:20.964: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6308
Nov 13 05:26:20.968: INFO: deleting *v1.Role: csi-mock-volumes-6308-483/external-resizer-cfg-csi-mock-volumes-6308
Nov 13 05:26:20.971: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6308-483/csi-resizer-role-cfg
Nov 13 05:26:20.974: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-snapshotter
Nov 13 05:26:20.977: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6308
Nov 13 05:26:20.980: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6308
Nov 13 05:26:20.984: INFO: deleting *v1.Role: csi-mock-volumes-6308-483/external-snapshotter-leaderelection-csi-mock-volumes-6308
Nov 13 05:26:20.987: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6308-483/external-snapshotter-leaderelection
Nov 13 05:26:20.990: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6308-483/csi-mock
Nov 13 05:26:20.993: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6308
Nov 13 05:26:20.996: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6308
Nov 13 05:26:20.999: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6308
Nov 13 05:26:21.002: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6308
Nov 13 05:26:21.005: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6308
Nov 13 05:26:21.009: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6308
Nov 13 05:26:21.013: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6308
Nov 13 05:26:21.017: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6308-483/csi-mockplugin
Nov 13 05:26:21.021: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6308-483/csi-mockplugin-attacher
Nov 13 05:26:21.024: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6308-483/csi-mockplugin-resizer
STEP: deleting the driver namespace: csi-mock-volumes-6308-483
STEP: Waiting for namespaces [csi-mock-volumes-6308-483] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:49.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:93.932 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should 
expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":5,"skipped":253,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:49.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 13 05:26:49.107: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:49.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-7355" for this suite.
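Each spec finishes by emitting a one-line JSON progress record, like the PASSED line above. As a minimal sketch (not the harness's own code), such records can be tallied assuming only the `msg` and optional `failures` fields that appear in this log:

```python
import json

def tally(report_lines):
    """Aggregate per-spec JSON progress records (one per line) into counts.

    Assumes each record carries the "msg" and optional "failures" fields
    seen in the log; all other fields are ignored.
    """
    passed = failed = 0
    failure_names = set()
    for line in report_lines:
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        elif rec["msg"].startswith("FAILED"):
            failed += 1
        failure_names.update(rec.get("failures", []))
    return {"passed": passed, "failed": failed, "failures": sorted(failure_names)}
```

Note that a record's `failures` list can be non-empty even on a PASSED spec, since it carries failures accumulated earlier on the same worker node (as in the record further below).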
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics in Volume Manager [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292
  Only supported for providers [gce gke aws] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:52.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be passed when CSIDriver does not exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-7414
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:25:52.534: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-attacher
Nov 13 05:25:52.539: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7414
Nov 13 05:25:52.540: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7414
Nov 13 05:25:52.542: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7414
Nov 13 05:25:52.545: INFO: creating *v1.Role:
csi-mock-volumes-7414-3570/external-attacher-cfg-csi-mock-volumes-7414 Nov 13 05:25:52.548: INFO: creating *v1.RoleBinding: csi-mock-volumes-7414-3570/csi-attacher-role-cfg Nov 13 05:25:52.551: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-provisioner Nov 13 05:25:52.553: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7414 Nov 13 05:25:52.553: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7414 Nov 13 05:25:52.556: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7414 Nov 13 05:25:52.559: INFO: creating *v1.Role: csi-mock-volumes-7414-3570/external-provisioner-cfg-csi-mock-volumes-7414 Nov 13 05:25:52.561: INFO: creating *v1.RoleBinding: csi-mock-volumes-7414-3570/csi-provisioner-role-cfg Nov 13 05:25:52.564: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-resizer Nov 13 05:25:52.567: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7414 Nov 13 05:25:52.567: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7414 Nov 13 05:25:52.569: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7414 Nov 13 05:25:52.572: INFO: creating *v1.Role: csi-mock-volumes-7414-3570/external-resizer-cfg-csi-mock-volumes-7414 Nov 13 05:25:52.577: INFO: creating *v1.RoleBinding: csi-mock-volumes-7414-3570/csi-resizer-role-cfg Nov 13 05:25:52.579: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-snapshotter Nov 13 05:25:52.582: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7414 Nov 13 05:25:52.582: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7414 Nov 13 05:25:52.584: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7414 Nov 13 05:25:52.587: INFO: creating *v1.Role: csi-mock-volumes-7414-3570/external-snapshotter-leaderelection-csi-mock-volumes-7414 Nov 13 05:25:52.590: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-7414-3570/external-snapshotter-leaderelection Nov 13 05:25:52.592: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-mock Nov 13 05:25:52.595: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7414 Nov 13 05:25:52.598: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7414 Nov 13 05:25:52.600: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7414 Nov 13 05:25:52.603: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7414 Nov 13 05:25:52.606: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7414 Nov 13 05:25:52.608: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7414 Nov 13 05:25:52.611: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7414 Nov 13 05:25:52.614: INFO: creating *v1.StatefulSet: csi-mock-volumes-7414-3570/csi-mockplugin Nov 13 05:25:52.618: INFO: creating *v1.StatefulSet: csi-mock-volumes-7414-3570/csi-mockplugin-attacher Nov 13 05:25:52.621: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7414 to register on node node1 STEP: Creating pod Nov 13 05:26:02.137: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:26:02.142: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5lgkf] to have phase Bound Nov 13 05:26:02.144: INFO: PersistentVolumeClaim pvc-5lgkf found but phase is Pending instead of Bound. 
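The bind wait above polls the claim roughly every two seconds until it reports Bound or the 5m0s timeout expires. The same loop can be sketched cluster-free, with an injected `get_phase` callable standing in for the real API read (all names here are hypothetical, not the framework's):

```python
import time

def wait_for_phase(get_phase, want="Bound", timeout=300.0, interval=2.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll get_phase() until it returns `want`; raise TimeoutError after `timeout`.

    Returns the elapsed seconds on success, mirroring the
    "found and phase=Bound (2.00606575s)" message in the log.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase == want:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"phase is {phase} after {elapsed:.1f}s, not {want}")
        sleep(interval)
```

Injecting `sleep` and `clock` keeps the sketch testable without waiting on a real cluster; a production caller would use the defaults.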
Nov 13 05:26:04.148: INFO: PersistentVolumeClaim pvc-5lgkf found and phase=Bound (2.00606575s) STEP: Deleting the previously created pod Nov 13 05:26:12.171: INFO: Deleting pod "pvc-volume-tester-98w4g" in namespace "csi-mock-volumes-7414" Nov 13 05:26:12.175: INFO: Wait up to 5m0s for pod "pvc-volume-tester-98w4g" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:26:22.193: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ad771d95-dd42-4609-b17f-1adfe985df16/volumes/kubernetes.io~csi/pvc-763c6f07-9553-4d9e-a34e-7bfb0ad7df68/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-98w4g Nov 13 05:26:22.193: INFO: Deleting pod "pvc-volume-tester-98w4g" in namespace "csi-mock-volumes-7414" STEP: Deleting claim pvc-5lgkf Nov 13 05:26:22.202: INFO: Waiting up to 2m0s for PersistentVolume pvc-763c6f07-9553-4d9e-a34e-7bfb0ad7df68 to get deleted Nov 13 05:26:22.204: INFO: PersistentVolume pvc-763c6f07-9553-4d9e-a34e-7bfb0ad7df68 found and phase=Bound (2.011647ms) Nov 13 05:26:24.207: INFO: PersistentVolume pvc-763c6f07-9553-4d9e-a34e-7bfb0ad7df68 was removed STEP: Deleting storageclass csi-mock-volumes-7414-scmxjp9 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7414 STEP: Waiting for namespaces [csi-mock-volumes-7414] to vanish STEP: uninstalling csi mock driver Nov 13 05:26:30.220: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-attacher Nov 13 05:26:30.224: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7414 Nov 13 05:26:30.228: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7414 Nov 13 05:26:30.231: INFO: deleting *v1.Role: csi-mock-volumes-7414-3570/external-attacher-cfg-csi-mock-volumes-7414 Nov 13 05:26:30.235: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-7414-3570/csi-attacher-role-cfg Nov 13 05:26:30.238: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-provisioner Nov 13 05:26:30.242: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7414 Nov 13 05:26:30.245: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7414 Nov 13 05:26:30.248: INFO: deleting *v1.Role: csi-mock-volumes-7414-3570/external-provisioner-cfg-csi-mock-volumes-7414 Nov 13 05:26:30.252: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7414-3570/csi-provisioner-role-cfg Nov 13 05:26:30.255: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-resizer Nov 13 05:26:30.260: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7414 Nov 13 05:26:30.263: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7414 Nov 13 05:26:30.266: INFO: deleting *v1.Role: csi-mock-volumes-7414-3570/external-resizer-cfg-csi-mock-volumes-7414 Nov 13 05:26:30.270: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7414-3570/csi-resizer-role-cfg Nov 13 05:26:30.274: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-snapshotter Nov 13 05:26:30.277: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7414 Nov 13 05:26:30.281: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7414 Nov 13 05:26:30.284: INFO: deleting *v1.Role: csi-mock-volumes-7414-3570/external-snapshotter-leaderelection-csi-mock-volumes-7414 Nov 13 05:26:30.288: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7414-3570/external-snapshotter-leaderelection Nov 13 05:26:30.291: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7414-3570/csi-mock Nov 13 05:26:30.294: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7414 Nov 13 05:26:30.297: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7414 Nov 13 05:26:30.301: 
INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7414 Nov 13 05:26:30.304: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7414 Nov 13 05:26:30.308: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7414 Nov 13 05:26:30.314: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7414 Nov 13 05:26:30.319: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7414 Nov 13 05:26:30.323: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7414-3570/csi-mockplugin Nov 13 05:26:30.330: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7414-3570/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7414-3570 STEP: Waiting for namespaces [csi-mock-volumes-7414-3570] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:26:58.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:65.874 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":12,"skipped":642,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously 
[Slow]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:58.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename flexvolume
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169
Nov 13 05:26:58.467: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:26:58.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "flexvolume-792" for this suite.
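The "Checking CSI driver logs" step in the CSI workload test earlier scans the mock driver's output for per-call JSON records (the "Found NodeUnpublishVolume" line). A sketch of that filter, assuming only the `Method` field of the record shape shown in the log:

```python
import json

def find_calls(log_lines, method_name):
    """Return parsed JSON call records whose gRPC Method ends with /method_name.

    '/csi.v1.Node/NodeUnpublishVolume' matches 'NodeUnpublishVolume';
    lines that are not JSON (ordinary log output) are skipped.
    """
    hits = []
    for line in log_lines:
        try:
            rec = json.loads(line)
        except ValueError:
            continue  # not a per-call record
        if rec.get("Method", "").endswith("/" + method_name):
            hits.append(rec)
    return hits
```

Matching on the `/`-prefixed suffix avoids accidental hits on longer method names that merely end with the same letters.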
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:26:30.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-6266 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:26:30.470: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-attacher Nov 13 05:26:30.473: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6266 Nov 13 05:26:30.473: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6266 Nov 13 05:26:30.475: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6266 Nov 13 05:26:30.478: INFO: creating *v1.Role: csi-mock-volumes-6266-9147/external-attacher-cfg-csi-mock-volumes-6266 Nov 13 05:26:30.481: INFO: creating *v1.RoleBinding: csi-mock-volumes-6266-9147/csi-attacher-role-cfg Nov 13 05:26:30.484: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-provisioner Nov 13 05:26:30.487: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6266 Nov 13 05:26:30.487: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6266 Nov 13 05:26:30.490: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6266 Nov 13 05:26:30.493: INFO: creating *v1.Role: csi-mock-volumes-6266-9147/external-provisioner-cfg-csi-mock-volumes-6266 Nov 13 05:26:30.496: INFO: creating *v1.RoleBinding: csi-mock-volumes-6266-9147/csi-provisioner-role-cfg Nov 13 05:26:30.498: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-resizer Nov 13 05:26:30.501: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6266 Nov 13 05:26:30.501: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6266 Nov 13 05:26:30.504: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6266 Nov 13 05:26:30.506: INFO: creating *v1.Role: csi-mock-volumes-6266-9147/external-resizer-cfg-csi-mock-volumes-6266 Nov 13 05:26:30.509: INFO: creating *v1.RoleBinding: csi-mock-volumes-6266-9147/csi-resizer-role-cfg Nov 13 05:26:30.513: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-snapshotter Nov 13 05:26:30.516: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6266 Nov 13 05:26:30.516: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6266 Nov 13 05:26:30.518: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6266 Nov 13 05:26:30.522: INFO: creating *v1.Role: csi-mock-volumes-6266-9147/external-snapshotter-leaderelection-csi-mock-volumes-6266 Nov 13 05:26:30.525: INFO: creating *v1.RoleBinding: csi-mock-volumes-6266-9147/external-snapshotter-leaderelection Nov 13 05:26:30.527: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-mock Nov 13 05:26:30.529: INFO: creating 
*v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6266 Nov 13 05:26:30.532: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6266 Nov 13 05:26:30.534: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6266 Nov 13 05:26:30.536: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6266 Nov 13 05:26:30.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6266 Nov 13 05:26:30.541: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6266 Nov 13 05:26:30.544: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6266 Nov 13 05:26:30.547: INFO: creating *v1.StatefulSet: csi-mock-volumes-6266-9147/csi-mockplugin Nov 13 05:26:30.551: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6266 Nov 13 05:26:30.555: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6266" Nov 13 05:26:30.557: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6266 to register on node node2 STEP: Creating pod Nov 13 05:26:35.572: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:26:35.577: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9v99l] to have phase Bound Nov 13 05:26:35.581: INFO: PersistentVolumeClaim pvc-9v99l found but phase is Pending instead of Bound. 
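The "Checking PVC events" step that follows dumps every watch event for the claim. Stripped of the struct noise, the sequence is: ADDED while Pending, MODIFIEDs as the provisioner annotation, binding annotations, and Bound status land, a MODIFIED once the deletion timestamp is set, then DELETED. A sketch of reducing a watch stream to that summary, with events assumed to be plain dicts rather than the real client types:

```python
def summarize(events):
    """Collapse a PVC watch-event stream to (type, phase, deleting?) triples."""
    out = []
    for e in events:
        obj = e["object"]
        deleting = obj["metadata"].get("deletionTimestamp") is not None
        out.append((e["type"], obj["status"]["phase"], deleting))
    return out
```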
Nov 13 05:26:37.584: INFO: PersistentVolumeClaim pvc-9v99l found and phase=Bound (2.007465548s) Nov 13 05:26:41.607: INFO: Deleting pod "pvc-volume-tester-h2k6b" in namespace "csi-mock-volumes-6266" Nov 13 05:26:41.611: INFO: Wait up to 5m0s for pod "pvc-volume-tester-h2k6b" to be fully deleted STEP: Checking PVC events Nov 13 05:26:52.652: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9v99l", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6266", SelfLink:"", UID:"5d352413-501d-4c1c-9080-ee1edc18b6df", ResourceVersion:"186167", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377995, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a9bcc8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a9bce0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002dc27d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002dc27e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:26:52.652: INFO: PVC event MODIFIED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9v99l", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6266", SelfLink:"", UID:"5d352413-501d-4c1c-9080-ee1edc18b6df", ResourceVersion:"186168", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377995, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6266"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0044d2a98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0044d2ab0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0044d2ac8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0044d2ae0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000d0acb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000d0acd0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:26:52.653: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9v99l", 
GenerateName:"pvc-", Namespace:"csi-mock-volumes-6266", SelfLink:"", UID:"5d352413-501d-4c1c-9080-ee1edc18b6df", ResourceVersion:"186176", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377995, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6266"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045827c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045827e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045827f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582810)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5d352413-501d-4c1c-9080-ee1edc18b6df", StorageClassName:(*string)(0xc002dc3070), VolumeMode:(*v1.PersistentVolumeMode)(0xc002dc3080), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:26:52.653: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pvc-9v99l", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6266", SelfLink:"", UID:"5d352413-501d-4c1c-9080-ee1edc18b6df", ResourceVersion:"186177", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377995, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6266"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582840), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582858)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582870), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582888)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5d352413-501d-4c1c-9080-ee1edc18b6df", StorageClassName:(*string)(0xc002dc30b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002dc30c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, 
Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Nov 13 05:26:52.653: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9v99l", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6266", SelfLink:"", UID:"5d352413-501d-4c1c-9080-ee1edc18b6df", ResourceVersion:"186278", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377995, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc0045828b8), DeletionGracePeriodSeconds:(*int64)(0xc004eb2e78), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6266"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045828d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045828e8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582918)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5d352413-501d-4c1c-9080-ee1edc18b6df", StorageClassName:(*string)(0xc002dc3100), VolumeMode:(*v1.PersistentVolumeMode)(0xc002dc3110), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Nov 13 05:26:52.653: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9v99l", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6266", SelfLink:"", UID:"5d352413-501d-4c1c-9080-ee1edc18b6df", ResourceVersion:"186279", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772377995, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004582948), DeletionGracePeriodSeconds:(*int64)(0xc004eb30c8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6266"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004582978)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004582990), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045829a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-5d352413-501d-4c1c-9080-ee1edc18b6df", StorageClassName:(*string)(0xc002dc3170), VolumeMode:(*v1.PersistentVolumeMode)(0xc002dc3180), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
STEP: Deleting pod pvc-volume-tester-h2k6b
Nov 13 05:26:52.653: INFO: Deleting pod "pvc-volume-tester-h2k6b" in namespace "csi-mock-volumes-6266"
STEP: Deleting claim pvc-9v99l
STEP: Deleting storageclass csi-mock-volumes-6266-scrt5xm
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-6266
STEP: Waiting for namespaces [csi-mock-volumes-6266] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:26:58.672: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-attacher
Nov 13 05:26:58.676: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6266
Nov 13 05:26:58.679: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6266
Nov 13 05:26:58.683: INFO: deleting *v1.Role: csi-mock-volumes-6266-9147/external-attacher-cfg-csi-mock-volumes-6266
Nov 13 05:26:58.687: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6266-9147/csi-attacher-role-cfg
Nov 13 05:26:58.691: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-provisioner
Nov 13 05:26:58.694: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6266
Nov 13 05:26:58.697: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6266
Nov 13 05:26:58.700: INFO: deleting *v1.Role: csi-mock-volumes-6266-9147/external-provisioner-cfg-csi-mock-volumes-6266
Nov 13 05:26:58.703: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6266-9147/csi-provisioner-role-cfg
Nov 13 05:26:58.706: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-resizer
Nov 13 05:26:58.710: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6266
Nov 13 05:26:58.713: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6266
Nov 13 05:26:58.716: INFO: deleting *v1.Role: csi-mock-volumes-6266-9147/external-resizer-cfg-csi-mock-volumes-6266
Nov 13 05:26:58.719: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6266-9147/csi-resizer-role-cfg
Nov 13 05:26:58.722: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-snapshotter
Nov 13 05:26:58.725: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6266
Nov 13 05:26:58.728: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6266
Nov 13 05:26:58.732: INFO: deleting *v1.Role: csi-mock-volumes-6266-9147/external-snapshotter-leaderelection-csi-mock-volumes-6266
Nov 13 05:26:58.735: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6266-9147/external-snapshotter-leaderelection
Nov 13 05:26:58.739: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6266-9147/csi-mock
Nov 13 05:26:58.741: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6266
Nov 13 05:26:58.745: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6266
Nov 13 05:26:58.748: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6266
Nov 13 05:26:58.751: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6266
Nov 13 05:26:58.753: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6266
Nov 13 05:26:58.757: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6266
Nov 13 05:26:58.760: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6266
Nov 13 05:26:58.763: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6266-9147/csi-mockplugin
Nov 13 05:26:58.766: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6266
STEP: deleting the driver namespace: csi-mock-volumes-6266-9147
STEP: Waiting for namespaces [csi-mock-volumes-6266-9147] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:04.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:34.372 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:49.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3"
Nov 13 05:26:51.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3 && dd if=/dev/zero of=/tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3/file] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-fd4ww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:51.198: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:26:51.313: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-fd4ww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:26:51.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:26:51.439: INFO: Creating a PV followed by a PVC
Nov 13 05:26:51.445: INFO: Waiting for PV local-pv598hq to bind to PVC pvc-h9wqk
Nov 13 05:26:51.445: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h9wqk] to have phase Bound
Nov 13 05:26:51.447: INFO: PersistentVolumeClaim pvc-h9wqk found but phase is Pending instead of Bound.
Nov 13 05:26:53.452: INFO: PersistentVolumeClaim pvc-h9wqk found but phase is Pending instead of Bound.
Nov 13 05:26:55.456: INFO: PersistentVolumeClaim pvc-h9wqk found but phase is Pending instead of Bound.
Nov 13 05:26:57.462: INFO: PersistentVolumeClaim pvc-h9wqk found and phase=Bound (6.016786074s)
Nov 13 05:26:57.462: INFO: Waiting up to 3m0s for PersistentVolume local-pv598hq to have phase Bound
Nov 13 05:26:57.464: INFO: PersistentVolume local-pv598hq found and phase=Bound (2.718755ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Nov 13 05:27:03.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9388 exec pod-167525d7-c48f-4c60-804f-66e6138c2ab0 --namespace=persistent-local-volumes-test-9388 -- stat -c %g /mnt/volume1'
Nov 13 05:27:03.727: INFO: stderr: ""
Nov 13 05:27:03.727: INFO: stdout: "1234\n"
STEP: Create second pod with same fsGroup and check fsGroup is correct
STEP: Creating a pod
Nov 13 05:27:07.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9388 exec pod-10bc31ff-877f-4097-80ba-c7a3ba392e9d --namespace=persistent-local-volumes-test-9388 -- stat -c %g /mnt/volume1'
Nov 13 05:27:08.010: INFO: stderr: ""
Nov 13 05:27:08.010: INFO: stdout: "1234\n"
STEP: Deleting first pod
STEP: Deleting pod pod-167525d7-c48f-4c60-804f-66e6138c2ab0 in namespace persistent-local-volumes-test-9388
STEP: Deleting second pod
STEP: Deleting pod pod-10bc31ff-877f-4097-80ba-c7a3ba392e9d in namespace persistent-local-volumes-test-9388
[AfterEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:27:08.018: INFO: Deleting PersistentVolumeClaim "pvc-h9wqk"
Nov 13 05:27:08.022: INFO: Deleting PersistentVolume "local-pv598hq"
Nov 13 05:27:08.026: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-fd4ww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:27:08.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3/file
Nov 13 05:27:08.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-fd4ww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:27:08.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3
Nov 13 05:27:08.203: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c448a851-6b9d-4374-8ee8-08d49a5d06d3] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-fd4ww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:27:08.203: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:08.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9388" for this suite.
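For reference, the "blockfswithoutformat" volume exercised above is backed by a loop device: the test writes a backing file with `dd if=/dev/zero bs=4096 count=5120` (4096 × 5120 = 20971520 bytes, i.e. 20 MiB), attaches it with `losetup -f`, and detaches it with `losetup -d` during teardown. Below is a minimal, hypothetical sketch of the backing-file step only (not the framework's actual code; it uses a sparse file instead of `dd`, and the root-only `losetup` steps are shown as comments):

```python
import os
import tempfile

def make_backing_file(path: str, bs: int = 4096, count: int = 5120) -> int:
    """Create a zero-filled backing file sized like `dd bs=4096 count=5120`."""
    size = bs * count
    with open(path, "wb") as f:
        f.truncate(size)  # sparse zeros; dd would write them out for real
    return os.path.getsize(path)

tmpdir = tempfile.mkdtemp(prefix="local-volume-test-")
size = make_backing_file(os.path.join(tmpdir, "file"))
print(size)  # 20971520 bytes = 20 MiB

# Root-only steps the test then runs on the node, for reference:
#   losetup -f <dir>/file    # attach the first free loop device
#   losetup -d /dev/loop0    # detach it again during teardown
```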
• [SLOW TEST:19.176 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set same fsGroup for two pods simultaneously [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":6,"skipped":288,"failed":0}
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:27:08.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 13 05:27:08.378: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:08.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-9689" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82

S [SKIPPING] in Spec Setup (BeforeEach) [0.057 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
    should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503

    Only supported for providers [gce gke aws] (not local)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:27:08.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:27:10.482: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5bd0b1b9-3ad3-4a91-a173-3eac065acd40] Namespace:persistent-local-volumes-test-2991 PodName:hostexec-node2-ms6vn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:27:10.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:27:10.598: INFO: Creating a PV followed by a PVC
Nov 13 05:27:10.605: INFO: Waiting for PV local-pvdqfb6 to bind to PVC pvc-7xbg9
Nov 13 05:27:10.605: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7xbg9] to have phase Bound
Nov 13 05:27:10.607: INFO: PersistentVolumeClaim pvc-7xbg9 found but phase is Pending instead of Bound.
Nov 13 05:27:12.611: INFO: PersistentVolumeClaim pvc-7xbg9 found and phase=Bound (2.005871165s)
Nov 13 05:27:12.611: INFO: Waiting up to 3m0s for PersistentVolume local-pvdqfb6 to have phase Bound
Nov 13 05:27:12.614: INFO: PersistentVolume local-pvdqfb6 found and phase=Bound (3.497406ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set fsGroup for one pod [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
STEP: Checking fsGroup is set
STEP: Creating a pod
Nov 13 05:27:16.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2991 exec pod-ded2e9b1-86ba-417b-8573-01941594efd6 --namespace=persistent-local-volumes-test-2991 -- stat -c %g /mnt/volume1'
Nov 13 05:27:16.885: INFO: stderr: ""
Nov 13 05:27:16.885: INFO: stdout: "1234\n"
STEP: Deleting pod
STEP: Deleting pod pod-ded2e9b1-86ba-417b-8573-01941594efd6 in namespace persistent-local-volumes-test-2991
[AfterEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:27:16.891: INFO: Deleting PersistentVolumeClaim "pvc-7xbg9"
Nov 13 05:27:16.895: INFO: Deleting PersistentVolume "local-pvdqfb6"
STEP: Removing the test directory
Nov 13 05:27:16.900: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5bd0b1b9-3ad3-4a91-a173-3eac065acd40] Namespace:persistent-local-volumes-test-2991 PodName:hostexec-node2-ms6vn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:27:16.900: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:16.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2991" for this suite.

• [SLOW TEST:8.567 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set fsGroup for one pod [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":7,"skipped":311,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:27:17.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] deletion should be idempotent
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557
Nov 13 05:27:17.075: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:17.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-2475" for this suite.

S [SKIPPING] [0.037 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  DynamicProvisioner [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152
    deletion should be idempotent [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557

    Only supported for providers [gce gke aws] (not local)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:563
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:14.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] token should not be plumbed down when csiServiceAccountTokenEnabled=false
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
STEP: Building a driver namespace object, basename csi-mock-volumes-3528
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:26:14.161: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-attacher
Nov 13 05:26:14.164: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3528
Nov 13 05:26:14.164: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3528
Nov 13 05:26:14.167: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3528
Nov 13 05:26:14.169: INFO: creating *v1.Role: csi-mock-volumes-3528-3238/external-attacher-cfg-csi-mock-volumes-3528
Nov 13 05:26:14.172: INFO: creating *v1.RoleBinding: csi-mock-volumes-3528-3238/csi-attacher-role-cfg
Nov 13 05:26:14.175: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-provisioner
Nov 13 05:26:14.177: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3528
Nov 13 05:26:14.177: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3528
Nov 13 05:26:14.180: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3528
Nov 13 05:26:14.183: INFO: creating *v1.Role: csi-mock-volumes-3528-3238/external-provisioner-cfg-csi-mock-volumes-3528
Nov 13 05:26:14.185: INFO: creating *v1.RoleBinding: csi-mock-volumes-3528-3238/csi-provisioner-role-cfg
Nov 13 05:26:14.188: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-resizer
Nov 13 05:26:14.191: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3528
Nov 13 05:26:14.191: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3528
Nov 13 05:26:14.193: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3528
Nov 13 05:26:14.196: INFO: creating *v1.Role: csi-mock-volumes-3528-3238/external-resizer-cfg-csi-mock-volumes-3528
Nov 13 05:26:14.199: INFO: creating *v1.RoleBinding: csi-mock-volumes-3528-3238/csi-resizer-role-cfg
Nov 13 05:26:14.201: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-snapshotter
Nov 13 05:26:14.204: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3528
Nov 13 05:26:14.204: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3528
Nov 13 05:26:14.206: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3528
Nov 13 05:26:14.208: INFO: creating *v1.Role: csi-mock-volumes-3528-3238/external-snapshotter-leaderelection-csi-mock-volumes-3528
Nov 13 05:26:14.211: INFO: creating *v1.RoleBinding: csi-mock-volumes-3528-3238/external-snapshotter-leaderelection
Nov 13 05:26:14.213: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-mock
Nov 13 05:26:14.216: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3528
Nov 13 05:26:14.218: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3528
Nov 13 05:26:14.222: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3528
Nov 13 05:26:14.225: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3528
Nov 13 05:26:14.228: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3528
Nov 13 05:26:14.230: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3528
Nov 13 05:26:14.233: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3528
Nov 13 05:26:14.236: INFO: creating *v1.StatefulSet: csi-mock-volumes-3528-3238/csi-mockplugin
Nov 13 05:26:14.240: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3528
Nov 13 05:26:14.244: INFO: creating *v1.StatefulSet: csi-mock-volumes-3528-3238/csi-mockplugin-attacher
Nov 13 05:26:14.247: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3528"
Nov 13 05:26:14.249: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3528 to register on node node2
STEP: Creating pod
Nov 13 05:26:23.767: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:26:23.771: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cbhq7] to have phase Bound
Nov 13 05:26:23.773: INFO: PersistentVolumeClaim pvc-cbhq7 found but phase is Pending instead of Bound.
Nov 13 05:26:25.776: INFO: PersistentVolumeClaim pvc-cbhq7 found and phase=Bound (2.005669408s)
STEP: Deleting the previously created pod
Nov 13 05:26:45.798: INFO: Deleting pod "pvc-volume-tester-658ls" in namespace "csi-mock-volumes-3528"
Nov 13 05:26:45.804: INFO: Wait up to 5m0s for pod "pvc-volume-tester-658ls" to be fully deleted
STEP: Checking CSI driver logs
Nov 13 05:26:51.816: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1db5482f-c266-4363-a3f0-7ce15d17629f/volumes/kubernetes.io~csi/pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-658ls
Nov 13 05:26:51.816: INFO: Deleting pod "pvc-volume-tester-658ls" in namespace "csi-mock-volumes-3528"
STEP: Deleting claim pvc-cbhq7
Nov 13 05:26:51.825: INFO: Waiting up to 2m0s for PersistentVolume pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0 to get deleted
Nov 13 05:26:51.827: INFO: PersistentVolume pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0 found and phase=Bound (2.015549ms)
Nov 13 05:26:53.831: INFO: PersistentVolume pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0 found and phase=Released (2.005699842s)
Nov 13 05:26:55.835: INFO: PersistentVolume pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0 found and phase=Released (4.009229535s)
Nov 13 05:26:57.839: INFO: PersistentVolume pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0 found and phase=Released (6.01385573s)
Nov 13 05:26:59.842: INFO: PersistentVolume pvc-bf51f0da-cb5b-4e7f-b772-39f38bd742d0 was removed
STEP: Deleting storageclass csi-mock-volumes-3528-sc2lhbz
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-3528
STEP: Waiting for namespaces [csi-mock-volumes-3528] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:27:05.855: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-attacher
Nov 13 05:27:05.860: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3528
Nov 13 05:27:05.863: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3528
Nov 13 05:27:05.867: INFO: deleting *v1.Role: csi-mock-volumes-3528-3238/external-attacher-cfg-csi-mock-volumes-3528
Nov 13 05:27:05.871: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3528-3238/csi-attacher-role-cfg
Nov 13 05:27:05.874: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-provisioner
Nov 13 05:27:05.877: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3528
Nov 13 05:27:05.881: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3528
Nov 13 05:27:05.884: INFO: deleting *v1.Role: csi-mock-volumes-3528-3238/external-provisioner-cfg-csi-mock-volumes-3528
Nov 13 05:27:05.891: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3528-3238/csi-provisioner-role-cfg
Nov 13 05:27:05.899: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-resizer
Nov 13 05:27:05.907: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3528
Nov 13 05:27:05.914: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3528
Nov 13 05:27:05.918: INFO: deleting *v1.Role: csi-mock-volumes-3528-3238/external-resizer-cfg-csi-mock-volumes-3528
Nov 13 05:27:05.921: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3528-3238/csi-resizer-role-cfg
Nov 13 05:27:05.924: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-snapshotter
Nov 13 05:27:05.928: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3528
Nov 13 05:27:05.932: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3528
Nov 13 05:27:05.936: INFO: deleting *v1.Role: csi-mock-volumes-3528-3238/external-snapshotter-leaderelection-csi-mock-volumes-3528
Nov 13 05:27:05.940: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3528-3238/external-snapshotter-leaderelection
Nov 13 05:27:05.943: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3528-3238/csi-mock
Nov 13 05:27:05.946: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3528
Nov 13 05:27:05.950: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3528
Nov 13 05:27:05.954: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3528
Nov 13 05:27:05.958: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3528
Nov 13 05:27:05.962: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3528
Nov 13 05:27:05.966: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3528
Nov 13 05:27:05.969: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3528
Nov 13 05:27:05.972: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3528-3238/csi-mockplugin
Nov 13 05:27:05.977: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3528
Nov 13 05:27:05.980: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3528-3238/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-3528-3238
STEP: Waiting for namespaces [csi-mock-volumes-3528-3238] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:17.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:63.901 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":12,"skipped":430,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:27:17.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Nov 13 05:27:17.134: INFO: Waiting up to 5m0s for pod "metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483" in namespace "projected-2108" to be "Succeeded or Failed"
Nov 13 05:27:17.136: INFO: Pod "metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366116ms
Nov 13 05:27:19.139: INFO: Pod "metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005153505s
Nov 13 05:27:21.143: INFO: Pod "metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009342867s
STEP: Saw pod success
Nov 13 05:27:21.143: INFO: Pod "metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483" satisfied condition "Succeeded or Failed"
Nov 13 05:27:21.145: INFO: Trying to get logs from node node2 pod metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483 container client-container:
STEP: delete the pod
Nov 13 05:27:21.158: INFO: Waiting for pod metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483 to disappear
Nov 13 05:27:21.160: INFO: Pod metadata-volume-406b7d59-1abc-4e2d-a757-78a6ecb5e483 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:27:21.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2108" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:21.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Nov 13 05:27:21.311: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8384" to be "Succeeded or Failed" Nov 13 05:27:21.314: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772206ms Nov 13 05:27:23.318: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006916935s Nov 13 05:27:25.321: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010406106s STEP: Saw pod success Nov 13 05:27:25.321: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 13 05:27:25.324: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-2: STEP: delete the pod Nov 13 05:27:25.376: INFO: Waiting for pod pod-host-path-test to disappear Nov 13 05:27:25.378: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:27:25.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8384" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":9,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:18.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:27:18.084: INFO: The status of Pod test-hostpath-type-tqhqr is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:27:20.087: INFO: The status of Pod test-hostpath-type-tqhqr is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:27:22.087: INFO: The status of Pod test-hostpath-type-tqhqr is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 
'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:27:28.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-8557" for this suite. • [SLOW TEST:10.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory","total":-1,"completed":13,"skipped":449,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:28.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:27:28.216: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:27:28.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9350" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":10,"skipped":380,"failed":0} [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:04.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-48 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:27:04.855: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-48-3548/csi-attacher Nov 13 05:27:04.859: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-48 Nov 13 05:27:04.859: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-48 Nov 13 05:27:04.862: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-48 Nov 13 05:27:04.864: INFO: creating *v1.Role: csi-mock-volumes-48-3548/external-attacher-cfg-csi-mock-volumes-48 Nov 13 05:27:04.867: INFO: creating *v1.RoleBinding: csi-mock-volumes-48-3548/csi-attacher-role-cfg Nov 13 05:27:04.870: INFO: creating *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-provisioner Nov 13 05:27:04.872: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-48 Nov 13 05:27:04.872: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-48 Nov 13 05:27:04.875: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-48 Nov 13 05:27:04.877: INFO: creating *v1.Role: csi-mock-volumes-48-3548/external-provisioner-cfg-csi-mock-volumes-48 Nov 13 05:27:04.880: INFO: creating *v1.RoleBinding: csi-mock-volumes-48-3548/csi-provisioner-role-cfg Nov 13 05:27:04.882: INFO: creating *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-resizer Nov 13 05:27:04.884: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-48 Nov 13 05:27:04.885: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-48 Nov 13 05:27:04.887: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-48 Nov 13 05:27:04.889: INFO: creating *v1.Role: csi-mock-volumes-48-3548/external-resizer-cfg-csi-mock-volumes-48 Nov 13 05:27:04.891: INFO: creating *v1.RoleBinding: csi-mock-volumes-48-3548/csi-resizer-role-cfg Nov 13 05:27:04.894: INFO: creating *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-snapshotter Nov 13 05:27:04.896: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-48 Nov 13 05:27:04.896: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-48 Nov 13 05:27:04.899: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-48 Nov 13 05:27:04.901: INFO: creating *v1.Role: csi-mock-volumes-48-3548/external-snapshotter-leaderelection-csi-mock-volumes-48 Nov 13 05:27:04.905: INFO: creating *v1.RoleBinding: csi-mock-volumes-48-3548/external-snapshotter-leaderelection Nov 13 05:27:04.908: INFO: creating *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-mock Nov 13 05:27:04.910: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-48 Nov 13 05:27:04.913: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-48 Nov 13 05:27:04.915: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-48 Nov 13 05:27:04.917: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-48 Nov 13 05:27:04.919: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-48 Nov 13 05:27:04.922: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-48 Nov 13 05:27:04.924: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-48 Nov 13 05:27:04.926: INFO: creating *v1.StatefulSet: csi-mock-volumes-48-3548/csi-mockplugin Nov 13 05:27:04.930: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-48 Nov 13 05:27:04.933: INFO: creating *v1.StatefulSet: csi-mock-volumes-48-3548/csi-mockplugin-attacher Nov 13 05:27:04.936: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-48" Nov 13 05:27:04.939: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-48 to register on node node1 STEP: Creating pod Nov 13 05:27:14.456: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:27:14.462: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mvdkv] to have phase Bound Nov 13 05:27:14.465: INFO: 
PersistentVolumeClaim pvc-mvdkv found but phase is Pending instead of Bound. Nov 13 05:27:16.469: INFO: PersistentVolumeClaim pvc-mvdkv found and phase=Bound (2.007142349s) STEP: Deleting the previously created pod Nov 13 05:27:22.492: INFO: Deleting pod "pvc-volume-tester-dbks6" in namespace "csi-mock-volumes-48" Nov 13 05:27:22.497: INFO: Wait up to 5m0s for pod "pvc-volume-tester-dbks6" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:27:32.529: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3e22451a-d070-40d5-8c85-13aa749d26e6/volumes/kubernetes.io~csi/pvc-3a907551-fb31-4850-90f4-9a2d28f369ea/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-dbks6 Nov 13 05:27:32.529: INFO: Deleting pod "pvc-volume-tester-dbks6" in namespace "csi-mock-volumes-48" STEP: Deleting claim pvc-mvdkv Nov 13 05:27:32.540: INFO: Waiting up to 2m0s for PersistentVolume pvc-3a907551-fb31-4850-90f4-9a2d28f369ea to get deleted Nov 13 05:27:32.542: INFO: PersistentVolume pvc-3a907551-fb31-4850-90f4-9a2d28f369ea found and phase=Bound (1.82184ms) Nov 13 05:27:34.544: INFO: PersistentVolume pvc-3a907551-fb31-4850-90f4-9a2d28f369ea was removed STEP: Deleting storageclass csi-mock-volumes-48-sc9zfdz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-48 STEP: Waiting for namespaces [csi-mock-volumes-48] to vanish STEP: uninstalling csi mock driver Nov 13 05:27:40.556: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-attacher Nov 13 05:27:40.561: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-48 Nov 13 05:27:40.564: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-48 Nov 13 05:27:40.568: INFO: deleting *v1.Role: 
csi-mock-volumes-48-3548/external-attacher-cfg-csi-mock-volumes-48 Nov 13 05:27:40.572: INFO: deleting *v1.RoleBinding: csi-mock-volumes-48-3548/csi-attacher-role-cfg Nov 13 05:27:40.576: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-provisioner Nov 13 05:27:40.579: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-48 Nov 13 05:27:40.582: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-48 Nov 13 05:27:40.586: INFO: deleting *v1.Role: csi-mock-volumes-48-3548/external-provisioner-cfg-csi-mock-volumes-48 Nov 13 05:27:40.589: INFO: deleting *v1.RoleBinding: csi-mock-volumes-48-3548/csi-provisioner-role-cfg Nov 13 05:27:40.592: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-resizer Nov 13 05:27:40.595: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-48 Nov 13 05:27:40.598: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-48 Nov 13 05:27:40.603: INFO: deleting *v1.Role: csi-mock-volumes-48-3548/external-resizer-cfg-csi-mock-volumes-48 Nov 13 05:27:40.609: INFO: deleting *v1.RoleBinding: csi-mock-volumes-48-3548/csi-resizer-role-cfg Nov 13 05:27:40.613: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-snapshotter Nov 13 05:27:40.619: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-48 Nov 13 05:27:40.626: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-48 Nov 13 05:27:40.631: INFO: deleting *v1.Role: csi-mock-volumes-48-3548/external-snapshotter-leaderelection-csi-mock-volumes-48 Nov 13 05:27:40.635: INFO: deleting *v1.RoleBinding: csi-mock-volumes-48-3548/external-snapshotter-leaderelection Nov 13 05:27:40.638: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-48-3548/csi-mock Nov 13 05:27:40.641: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-48 Nov 13 05:27:40.644: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-48 Nov 13 05:27:40.647: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-48 Nov 13 05:27:40.650: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-48 Nov 13 05:27:40.653: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-48 Nov 13 05:27:40.656: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-48 Nov 13 05:27:40.660: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-48 Nov 13 05:27:40.663: INFO: deleting *v1.StatefulSet: csi-mock-volumes-48-3548/csi-mockplugin Nov 13 05:27:40.667: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-48 Nov 13 05:27:40.670: INFO: deleting *v1.StatefulSet: csi-mock-volumes-48-3548/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-48-3548 STEP: Waiting for namespaces [csi-mock-volumes-48-3548] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:27:46.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:41.897 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":11,"skipped":380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS 
------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:46.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 13 05:27:46.768: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:27:46.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-3082" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:25:47.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: 
Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-3214 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:25:47.561: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-attacher Nov 13 05:25:47.564: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3214 Nov 13 05:25:47.564: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3214 Nov 13 05:25:47.567: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3214 Nov 13 05:25:47.570: INFO: creating *v1.Role: csi-mock-volumes-3214-9170/external-attacher-cfg-csi-mock-volumes-3214 Nov 13 05:25:47.573: INFO: creating *v1.RoleBinding: csi-mock-volumes-3214-9170/csi-attacher-role-cfg Nov 13 05:25:47.576: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-provisioner Nov 13 05:25:47.579: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3214 Nov 13 05:25:47.579: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3214 Nov 13 05:25:47.582: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3214 Nov 13 05:25:47.584: INFO: creating *v1.Role: csi-mock-volumes-3214-9170/external-provisioner-cfg-csi-mock-volumes-3214 Nov 13 05:25:47.587: INFO: creating *v1.RoleBinding: csi-mock-volumes-3214-9170/csi-provisioner-role-cfg Nov 13 05:25:47.590: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-resizer Nov 13 05:25:47.593: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3214 Nov 13 05:25:47.593: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3214 Nov 13 05:25:47.596: INFO: 
creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3214 Nov 13 05:25:47.599: INFO: creating *v1.Role: csi-mock-volumes-3214-9170/external-resizer-cfg-csi-mock-volumes-3214 Nov 13 05:25:47.602: INFO: creating *v1.RoleBinding: csi-mock-volumes-3214-9170/csi-resizer-role-cfg Nov 13 05:25:47.605: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-snapshotter Nov 13 05:25:47.608: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3214 Nov 13 05:25:47.608: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3214 Nov 13 05:25:47.611: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3214 Nov 13 05:25:47.615: INFO: creating *v1.Role: csi-mock-volumes-3214-9170/external-snapshotter-leaderelection-csi-mock-volumes-3214 Nov 13 05:25:47.617: INFO: creating *v1.RoleBinding: csi-mock-volumes-3214-9170/external-snapshotter-leaderelection Nov 13 05:25:47.622: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-mock Nov 13 05:25:47.625: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3214 Nov 13 05:25:47.628: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3214 Nov 13 05:25:47.630: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3214 Nov 13 05:25:47.633: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3214 Nov 13 05:25:47.635: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3214 Nov 13 05:25:47.638: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3214 Nov 13 05:25:47.641: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3214 Nov 13 05:25:47.643: INFO: creating *v1.StatefulSet: csi-mock-volumes-3214-9170/csi-mockplugin Nov 13 05:25:47.648: INFO: creating *v1.StatefulSet: 
csi-mock-volumes-3214-9170/csi-mockplugin-attacher Nov 13 05:25:47.651: INFO: creating *v1.StatefulSet: csi-mock-volumes-3214-9170/csi-mockplugin-resizer Nov 13 05:25:47.655: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3214 to register on node node2 STEP: Creating pod Nov 13 05:25:57.174: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:25:57.179: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gm7dv] to have phase Bound Nov 13 05:25:57.181: INFO: PersistentVolumeClaim pvc-gm7dv found but phase is Pending instead of Bound. Nov 13 05:25:59.184: INFO: PersistentVolumeClaim pvc-gm7dv found and phase=Bound (2.005172874s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-snpdk Nov 13 05:27:25.228: INFO: Deleting pod "pvc-volume-tester-snpdk" in namespace "csi-mock-volumes-3214" Nov 13 05:27:25.233: INFO: Wait up to 5m0s for pod "pvc-volume-tester-snpdk" to be fully deleted STEP: Deleting claim pvc-gm7dv Nov 13 05:27:43.246: INFO: Waiting up to 2m0s for PersistentVolume pvc-2d5727b9-78ed-4306-80d0-f92ca7a7059d to get deleted Nov 13 05:27:43.248: INFO: PersistentVolume pvc-2d5727b9-78ed-4306-80d0-f92ca7a7059d found and phase=Bound (1.833592ms) Nov 13 05:27:45.255: INFO: PersistentVolume pvc-2d5727b9-78ed-4306-80d0-f92ca7a7059d was removed STEP: Deleting storageclass csi-mock-volumes-3214-sc64dnn STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3214 STEP: Waiting for namespaces [csi-mock-volumes-3214] to vanish STEP: uninstalling csi mock driver Nov 13 05:27:51.266: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-attacher Nov 13 05:27:51.272: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3214 Nov 13 05:27:51.277: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3214 Nov 13 05:27:51.280: INFO: 
deleting *v1.Role: csi-mock-volumes-3214-9170/external-attacher-cfg-csi-mock-volumes-3214 Nov 13 05:27:51.283: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3214-9170/csi-attacher-role-cfg Nov 13 05:27:51.286: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-provisioner Nov 13 05:27:51.290: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3214 Nov 13 05:27:51.293: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3214 Nov 13 05:27:51.296: INFO: deleting *v1.Role: csi-mock-volumes-3214-9170/external-provisioner-cfg-csi-mock-volumes-3214 Nov 13 05:27:51.300: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3214-9170/csi-provisioner-role-cfg Nov 13 05:27:51.304: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-resizer Nov 13 05:27:51.308: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3214 Nov 13 05:27:51.315: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3214 Nov 13 05:27:51.324: INFO: deleting *v1.Role: csi-mock-volumes-3214-9170/external-resizer-cfg-csi-mock-volumes-3214 Nov 13 05:27:51.330: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3214-9170/csi-resizer-role-cfg Nov 13 05:27:51.334: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-snapshotter Nov 13 05:27:51.338: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3214 Nov 13 05:27:51.342: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3214 Nov 13 05:27:51.345: INFO: deleting *v1.Role: csi-mock-volumes-3214-9170/external-snapshotter-leaderelection-csi-mock-volumes-3214 Nov 13 05:27:51.348: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3214-9170/external-snapshotter-leaderelection Nov 13 05:27:51.353: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3214-9170/csi-mock Nov 13 05:27:51.356: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3214 Nov 13 
05:27:51.360: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3214
Nov 13 05:27:51.363: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3214
Nov 13 05:27:51.366: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3214
Nov 13 05:27:51.370: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3214
Nov 13 05:27:51.373: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3214
Nov 13 05:27:51.376: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3214
Nov 13 05:27:51.379: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3214-9170/csi-mockplugin
Nov 13 05:27:51.384: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3214-9170/csi-mockplugin-attacher
Nov 13 05:27:51.387: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3214-9170/csi-mockplugin-resizer
STEP: deleting the driver namespace: csi-mock-volumes-3214-9170
STEP: Waiting for namespaces [csi-mock-volumes-3214-9170] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:03.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:135.903 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":13,"skipped":397,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:03.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-49233e0a-fa98-44f0-bfd5-25cdf321a7e0
STEP: Creating a pod to test consume configMaps
Nov 13 05:28:03.534: INFO: Waiting up to 5m0s for pod "pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3" in namespace "configmap-8595" to be "Succeeded or Failed"
Nov 13 05:28:03.536: INFO: Pod "pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192134ms
Nov 13 05:28:05.540: INFO: Pod "pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006010966s
Nov 13 05:28:07.545: INFO: Pod "pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01091205s
STEP: Saw pod success
Nov 13 05:28:07.545: INFO: Pod "pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3" satisfied condition "Succeeded or Failed"
Nov 13 05:28:07.549: INFO: Trying to get logs from node node2 pod pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3 container agnhost-container:
STEP: delete the pod
Nov 13 05:28:07.574: INFO: Waiting for pod pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3 to disappear
Nov 13 05:28:07.575: INFO: Pod pod-configmaps-92a894ec-a446-4b8d-8b45-9ab81f6fbdf3 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:07.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8595" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":439,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:26:58.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call NodeUnstage after NodeStage ephemeral error
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828
STEP: Building a driver namespace object, basename csi-mock-volumes-5976
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock proxy
Nov 13 05:26:58.548: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-attacher
Nov 13 05:26:58.551: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5976
Nov 13 05:26:58.551: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5976
Nov 13 05:26:58.554: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5976
Nov 13 05:26:58.556: INFO: creating *v1.Role: csi-mock-volumes-5976-2733/external-attacher-cfg-csi-mock-volumes-5976
Nov 13 05:26:58.559: INFO: creating *v1.RoleBinding: csi-mock-volumes-5976-2733/csi-attacher-role-cfg
Nov 13 05:26:58.561: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-provisioner
Nov 13 05:26:58.563: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5976
Nov 13 05:26:58.563: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5976
Nov 13 05:26:58.566: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5976
Nov 13 05:26:58.568: INFO: creating *v1.Role: csi-mock-volumes-5976-2733/external-provisioner-cfg-csi-mock-volumes-5976
Nov 13 05:26:58.571: INFO: creating *v1.RoleBinding: csi-mock-volumes-5976-2733/csi-provisioner-role-cfg
Nov 13 05:26:58.574: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-resizer
Nov 13 05:26:58.576: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5976
Nov 13 05:26:58.576: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5976
Nov 13 05:26:58.579: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5976
Nov 13 05:26:58.581: INFO: creating *v1.Role: csi-mock-volumes-5976-2733/external-resizer-cfg-csi-mock-volumes-5976
Nov 13 05:26:58.584: INFO: creating *v1.RoleBinding: csi-mock-volumes-5976-2733/csi-resizer-role-cfg
Nov 13 05:26:58.586: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-snapshotter
Nov 13 05:26:58.589: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5976
Nov 13 05:26:58.589: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5976
Nov 13 05:26:58.592: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5976
Nov 13 05:26:58.594: INFO: creating *v1.Role: csi-mock-volumes-5976-2733/external-snapshotter-leaderelection-csi-mock-volumes-5976
Nov 13 05:26:58.597: INFO: creating *v1.RoleBinding: csi-mock-volumes-5976-2733/external-snapshotter-leaderelection
Nov 13 05:26:58.599: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-mock
Nov 13 05:26:58.601: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5976
Nov 13 05:26:58.603: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5976
Nov 13 05:26:58.606: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5976
Nov 13 05:26:58.608: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5976
Nov 13 05:26:58.611: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5976
Nov 13 05:26:58.613: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5976
Nov 13 05:26:58.615: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5976
Nov 13 05:26:58.618: INFO: creating *v1.StatefulSet: csi-mock-volumes-5976-2733/csi-mockplugin
Nov 13 05:26:58.622: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5976
Nov 13 05:26:58.625: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5976"
Nov 13 05:26:58.627: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5976 to register on node node1
I1113 05:27:03.702396 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5976","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1113 05:27:03.803716 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1113 05:27:03.805746 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5976","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1113 05:27:03.846377 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1113 05:27:03.888281 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1113 05:27:03.995149 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5976"},"Error":"","FullError":null}
STEP: Creating pod
Nov 13 05:27:08.144: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:27:08.148: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jzp8t] to have phase Bound
Nov 13 05:27:08.150: INFO: PersistentVolumeClaim pvc-jzp8t found but phase is Pending instead of Bound.
I1113 05:27:08.157198 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc"}}},"Error":"","FullError":null}
Nov 13 05:27:10.153: INFO: PersistentVolumeClaim pvc-jzp8t found and phase=Bound (2.005253089s)
Nov 13 05:27:10.166: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jzp8t] to have phase Bound
Nov 13 05:27:10.169: INFO: PersistentVolumeClaim pvc-jzp8t found and phase=Bound (2.078808ms)
STEP: Waiting for expected CSI calls
I1113 05:27:10.416711 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:27:10.419643 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc","storage.kubernetes.io/csiProvisionerIdentity":"1636781223926-8081-csi-mock-csi-mock-volumes-5976"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
I1113 05:27:11.020878 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:27:11.022688 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc","storage.kubernetes.io/csiProvisionerIdentity":"1636781223926-8081-csi-mock-csi-mock-volumes-5976"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
STEP: Deleting the previously created pod
Nov 13 05:27:11.169: INFO: Deleting pod "pvc-volume-tester-nlp2h" in namespace "csi-mock-volumes-5976"
Nov 13 05:27:11.174: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nlp2h" to be fully deleted
I1113 05:27:12.029835 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:27:12.031670 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc","storage.kubernetes.io/csiProvisionerIdentity":"1636781223926-8081-csi-mock-csi-mock-volumes-5976"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
I1113 05:27:14.043672 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:27:14.045421 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc","storage.kubernetes.io/csiProvisionerIdentity":"1636781223926-8081-csi-mock-csi-mock-volumes-5976"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
I1113 05:27:18.077959 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:27:18.080340 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc","storage.kubernetes.io/csiProvisionerIdentity":"1636781223926-8081-csi-mock-csi-mock-volumes-5976"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}}
I1113 05:27:21.513209 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1113 05:27:21.514966 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc/globalmount"},"Response":{},"Error":"","FullError":null}
STEP: Waiting for all remaining expected CSI calls
STEP: Deleting pod pvc-volume-tester-nlp2h
Nov 13 05:27:24.179: INFO: Deleting pod "pvc-volume-tester-nlp2h" in namespace "csi-mock-volumes-5976"
STEP: Deleting claim pvc-jzp8t
Nov 13 05:27:24.188: INFO: Waiting up to 2m0s for PersistentVolume pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc to get deleted
Nov 13 05:27:24.190: INFO: PersistentVolume pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc found and phase=Bound (1.707722ms)
I1113 05:27:24.204781 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
Nov 13 05:27:26.193: INFO: PersistentVolume pvc-f8699ec3-8ba1-4d3d-9a55-4d3478eae5fc was removed
STEP: Deleting storageclass csi-mock-volumes-5976-scqrz9d
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-5976
STEP: Waiting for namespaces [csi-mock-volumes-5976] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:27:32.541: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-attacher
Nov 13 05:27:32.544: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5976
Nov 13 05:27:32.548: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5976
Nov 13 05:27:32.551: INFO: deleting *v1.Role: csi-mock-volumes-5976-2733/external-attacher-cfg-csi-mock-volumes-5976
Nov 13 05:27:32.555: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5976-2733/csi-attacher-role-cfg
Nov 13 05:27:32.560: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-provisioner
Nov 13 05:27:32.564: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5976
Nov 13 05:27:32.567: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5976
Nov 13 05:27:32.570: INFO: deleting *v1.Role: csi-mock-volumes-5976-2733/external-provisioner-cfg-csi-mock-volumes-5976
Nov 13 05:27:32.573: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5976-2733/csi-provisioner-role-cfg
Nov 13 05:27:32.577: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-resizer
Nov 13 05:27:32.580: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5976
Nov 13 05:27:32.583: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5976
Nov 13 05:27:32.587: INFO: deleting *v1.Role: csi-mock-volumes-5976-2733/external-resizer-cfg-csi-mock-volumes-5976
Nov 13 05:27:32.590: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5976-2733/csi-resizer-role-cfg
Nov 13 05:27:32.593: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-snapshotter
Nov 13 05:27:32.596: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5976
Nov 13 05:27:32.599: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5976
Nov 13 05:27:32.603: INFO: deleting *v1.Role: csi-mock-volumes-5976-2733/external-snapshotter-leaderelection-csi-mock-volumes-5976
Nov 13 05:27:32.606: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5976-2733/external-snapshotter-leaderelection
Nov 13 05:27:32.609: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5976-2733/csi-mock
Nov 13 05:27:32.613: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5976
Nov 13 05:27:32.618: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5976
Nov 13 05:27:32.621: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5976
Nov 13 05:27:32.626: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5976
Nov 13 05:27:32.629: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5976
Nov 13 05:27:32.633: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5976
Nov 13 05:27:32.637: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5976
Nov 13 05:27:32.642: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5976-2733/csi-mockplugin
Nov 13 05:27:32.646: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5976
STEP: deleting the driver namespace: csi-mock-volumes-5976-2733
STEP: Waiting for namespaces [csi-mock-volumes-5976-2733] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:16.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:78.185 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI NodeStage error cases [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734
    should call NodeUnstage after NodeStage ephemeral error
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error","total":-1,"completed":13,"skipped":688,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:16.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 13 05:28:16.737: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:16.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-7991" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
    should create unbound pvc count metrics for pvc controller after creating pvc only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494
    Only supported for providers [gce gke aws] (not local)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:16.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-fcabedf4-928d-4030-b642-5f4631ad3fc4
STEP: Creating a pod to test consume configMaps
Nov 13 05:28:16.857: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7" in namespace "projected-4475" to be "Succeeded or Failed"
Nov 13 05:28:16.859: INFO: Pod "pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371639ms
Nov 13 05:28:18.863: INFO: Pod "pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005985385s
Nov 13 05:28:20.867: INFO: Pod "pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009902673s
STEP: Saw pod success
Nov 13 05:28:20.867: INFO: Pod "pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7" satisfied condition "Succeeded or Failed"
Nov 13 05:28:20.871: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7 container agnhost-container:
STEP: delete the pod
Nov 13 05:28:20.910: INFO: Waiting for pod pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7 to disappear
Nov 13 05:28:20.913: INFO: Pod pod-projected-configmaps-8d7ad4d7-4ba5-4987-8a69-ecbc05f317f7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:20.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4475" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":744,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:07.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42
[It] should be mountable
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
STEP: starting configmap-client
STEP: Checking that text file contents are perfect.
Nov 13 05:28:11.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-7300 exec configmap-client --namespace=volume-7300 -- cat /opt/0/firstfile'
Nov 13 05:28:11.890: INFO: stderr: ""
Nov 13 05:28:11.890: INFO: stdout: "this is the first file"
Nov 13 05:28:11.890: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-7300 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:28:11.890: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:28:11.970: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-7300 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:28:11.971: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:28:12.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-7300 exec configmap-client --namespace=volume-7300 -- cat /opt/1/secondfile'
Nov 13 05:28:12.291: INFO: stderr: ""
Nov 13 05:28:12.291: INFO: stdout: "this is the second file"
Nov 13 05:28:12.291: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-7300 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:28:12.291: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:28:12.358: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-7300 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:28:12.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Deleting pod configmap-client in namespace volume-7300
Nov 13 05:28:12.443: INFO: Waiting for pod configmap-client to disappear
Nov 13 05:28:12.446: INFO: Pod configmap-client still exists
Nov 13 05:28:14.448: INFO: Waiting for pod configmap-client to disappear
Nov 13 05:28:14.451: INFO: Pod configmap-client still exists
Nov 13 05:28:16.449: INFO: Waiting for pod configmap-client to disappear
Nov 13 05:28:16.453: INFO: Pod configmap-client still exists
Nov 13 05:28:18.449: INFO: Waiting for pod configmap-client to disappear
Nov 13 05:28:18.452: INFO: Pod configmap-client still exists
Nov 13 05:28:20.446: INFO: Waiting for pod configmap-client to disappear
Nov 13 05:28:20.449: INFO: Pod configmap-client still exists
Nov 13 05:28:22.448: INFO: Waiting for pod configmap-client to disappear
Nov 13 05:28:22.450: INFO: Pod configmap-client no longer exists
[AfterEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:22.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7300" for this suite.
• [SLOW TEST:14.848 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":15,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:28:20.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:28:22.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa-backend && mount --bind /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa-backend /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa-backend && ln -s /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa-backend /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa] Namespace:persistent-local-volumes-test-8461 PodName:hostexec-node1-nrcww ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:22.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:28:23.081: INFO: Creating a PV followed by a PVC Nov 13 05:28:23.088: INFO: Waiting for PV local-pv9c74g to bind to PVC pvc-hk9zq Nov 13 05:28:23.089: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hk9zq] to have phase Bound Nov 13 05:28:23.091: INFO: PersistentVolumeClaim pvc-hk9zq found but phase is Pending instead of Bound. Nov 13 05:28:25.095: INFO: PersistentVolumeClaim pvc-hk9zq found and phase=Bound (2.006074385s) Nov 13 05:28:25.095: INFO: Waiting up to 3m0s for PersistentVolume local-pv9c74g to have phase Bound Nov 13 05:28:25.097: INFO: PersistentVolume local-pv9c74g found and phase=Bound (2.246584ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:28:25.102: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:28:25.104: INFO: Deleting PersistentVolumeClaim "pvc-hk9zq" Nov 13 05:28:25.110: INFO: Deleting PersistentVolume "local-pv9c74g" STEP: Removing the test directory Nov 13 05:28:25.114: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa && umount /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa-backend && rm -r /tmp/local-volume-test-0e8b6efc-24b7-4ad4-b453-1e42d44adbfa-backend] 
Namespace:persistent-local-volumes-test-8461 PodName:hostexec-node1-nrcww ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:25.114: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:28:25.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8461" for this suite. S [SKIPPING] [4.295 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:28:22.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Nov 13 05:28:22.530: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Nov 13 05:28:22.535: INFO: Waiting up to 30s for PersistentVolume hostpath-gxxgl to have phase Available Nov 13 05:28:22.537: INFO: PersistentVolume hostpath-gxxgl found but phase is Pending instead of Available. Nov 13 05:28:23.540: INFO: PersistentVolume hostpath-gxxgl found and phase=Available (1.005127477s) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Nov 13 05:28:23.549: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-78cmr] to have phase Bound Nov 13 05:28:23.552: INFO: PersistentVolumeClaim pvc-78cmr found but phase is Pending instead of Bound. 
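The repeated pattern above — "Waiting up to timeout=3m0s for PersistentVolumeClaims ... found but phase is Pending instead of Bound", then "found and phase=Bound (2.00s)" — is a fixed-interval poll with a deadline. A minimal sketch of that polling loop (names like `wait_for_phase` are illustrative, not the framework's actual helper; the real e2e framework has its own wait utilities):

```python
import time

def wait_for_phase(get_phase, want, timeout=180.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns `want` or `timeout` elapses.

    Mirrors the log's behavior: re-check roughly every `interval`
    seconds, report the last observed phase on timeout.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase == want:
            return phase
        if clock() >= deadline:
            raise TimeoutError(
                f"still {phase!r} after {timeout}s, wanted {want!r}")
        sleep(interval)
```

The `clock` and `sleep` parameters are injectable so the loop can be exercised without real waiting, which is also how the 3m0s default can coexist with the ~2s success seen in the log.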
Nov 13 05:28:25.555: INFO: PersistentVolumeClaim pvc-78cmr found and phase=Bound (2.005884857s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Nov 13 05:28:25.564: INFO: Waiting up to 3m0s for PersistentVolume hostpath-gxxgl to get deleted Nov 13 05:28:25.567: INFO: PersistentVolume hostpath-gxxgl found and phase=Bound (2.397923ms) Nov 13 05:28:27.571: INFO: PersistentVolume hostpath-gxxgl was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:28:27.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-8745" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Nov 13 05:28:27.580: INFO: AfterEach: Cleaning up test resources. 
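The PV Protection steps above ("the PV must not be removed ... Checking that the PV status is Terminating ... automatically removed ... no longer bound") follow standard finalizer semantics: a delete only stamps a deletion timestamp, and the object persists until the controller strips its protection finalizer. A toy model of that lifecycle, assuming nothing about the real controller beyond what the log shows (`FakePV` and its methods are hypothetical names; the finalizer string is the one Kubernetes uses for PVs):

```python
class FakePV:
    """Toy model of PV Protection finalizer semantics."""

    def __init__(self):
        self.finalizers = {"kubernetes.io/pv-protection"}
        self.deletion_requested = False  # stands in for deletionTimestamp

    def delete(self):
        # A user delete only marks the object; it does not remove it.
        self.deletion_requested = True

    def release(self):
        # The controller removes the finalizer once no PVC is bound.
        self.finalizers.discard("kubernetes.io/pv-protection")

    @property
    def exists(self):
        # Actually gone only when deletion was requested AND no
        # finalizers remain.
        return not (self.deletion_requested and not self.finalizers)

    @property
    def status(self):
        if self.deletion_requested and self.exists:
            return "Terminating"
        return "Bound"
```

This is why the log sees `hostpath-gxxgl found and phase=Bound` immediately after the PV delete, and only sees it removed after the bound PVC is deleted.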
Nov 13 05:28:27.580: INFO: Deleting PersistentVolumeClaim "pvc-78cmr" Nov 13 05:28:27.582: INFO: Deleting PersistentVolume "hostpath-gxxgl" • [SLOW TEST:5.074 seconds] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":16,"skipped":474,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:28:27.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:28:31.633: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-01518434-1e6b-4b9d-a650-4263e9a8dd9b-backend && ln -s /tmp/local-volume-test-01518434-1e6b-4b9d-a650-4263e9a8dd9b-backend /tmp/local-volume-test-01518434-1e6b-4b9d-a650-4263e9a8dd9b] Namespace:persistent-local-volumes-test-6788 PodName:hostexec-node2-6snmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:31.633: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:28:31.724: INFO: Creating a PV followed by a PVC Nov 13 05:28:31.732: INFO: Waiting for PV local-pvnccsd to bind to PVC pvc-vqqd5 Nov 13 05:28:31.732: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vqqd5] to have phase Bound Nov 13 05:28:31.734: INFO: PersistentVolumeClaim pvc-vqqd5 found but phase is Pending instead of Bound. Nov 13 05:28:33.738: INFO: PersistentVolumeClaim pvc-vqqd5 found and phase=Bound (2.005580855s) Nov 13 05:28:33.738: INFO: Waiting up to 3m0s for PersistentVolume local-pvnccsd to have phase Bound Nov 13 05:28:33.740: INFO: PersistentVolume local-pvnccsd found and phase=Bound (2.554531ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:28:37.767: INFO: pod "pod-191ba101-cf1d-487c-8d2f-023fc638e839" created on Node "node2" STEP: Writing in pod1 Nov 13 05:28:37.767: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6788 PodName:pod-191ba101-cf1d-487c-8d2f-023fc638e839 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:28:37.767: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:28:37.888: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:28:37.888: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6788 PodName:pod-191ba101-cf1d-487c-8d2f-023fc638e839 ContainerName:write-pod Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:28:37.888: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:28:37.980: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-191ba101-cf1d-487c-8d2f-023fc638e839 in namespace persistent-local-volumes-test-6788 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:28:37.988: INFO: Deleting PersistentVolumeClaim "pvc-vqqd5" Nov 13 05:28:37.991: INFO: Deleting PersistentVolume "local-pvnccsd" STEP: Removing the test directory Nov 13 05:28:37.995: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-01518434-1e6b-4b9d-a650-4263e9a8dd9b && rm -r /tmp/local-volume-test-01518434-1e6b-4b9d-a650-4263e9a8dd9b-backend] Namespace:persistent-local-volumes-test-6788 PodName:hostexec-node2-6snmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:37.995: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:28:38.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6788" for this suite. 
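The `dir-link` volume type exercised above is set up by the shell sequence in the ExecWithOptions records: `mkdir <path>-backend && ln -s <path>-backend <path>`, torn down later with `rm <link> && rm -r <backend>`. A minimal sketch of that setup/teardown pair using ordinary filesystem calls (no bind mount or root needed; the helper names and paths are illustrative, not the e2e framework's):

```python
import os
import shutil
import tempfile  # callers use this to get a scratch base directory

def setup_dir_link(base):
    """Mimic the log's dir-link init: create a backend directory,
    then expose it through a symlink."""
    backend = os.path.join(base, "local-volume-backend")
    link = os.path.join(base, "local-volume")
    os.mkdir(backend)
    os.symlink(backend, link)
    return backend, link

def teardown_dir_link(backend, link):
    """Same order as the log's cleanup: remove the symlink first,
    then the backend directory it pointed at."""
    os.remove(link)
    shutil.rmtree(backend)
```

Writes through the link land in the backend directory, which is what lets the test's `write-pod` and a later reader agree on `/mnt/volume1/test-file` contents.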
• [SLOW TEST:10.529 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":17,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:28.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-7924 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:27:28.314: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-attacher Nov 13 05:27:28.316: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7924 Nov 13 05:27:28.317: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7924 
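The "exhausted, late binding, with topology" case that follows makes the mock driver fail its first `CreateVolume` with gRPC code 8 (`ResourceExhausted`, the `"fake error"` visible in the gRPC log below), and the provisioner simply calls again a few seconds later and succeeds. A sketch of that retry shape, under the assumption of a simple bounded exponential backoff (`create_volume_with_retry` is an illustrative name, not the external-provisioner's actual code):

```python
import time

RESOURCE_EXHAUSTED = 8  # gRPC status code, matches FullError {"code":8} in the log

def create_volume_with_retry(create, attempts=5, backoff=0.0):
    """Call create() -> (grpc_code, volume), retrying while the driver
    reports ResourceExhausted; give up after `attempts` tries."""
    for i in range(attempts):
        code, volume = create()
        if code != RESOURCE_EXHAUSTED:
            return volume
        # back off before the next attempt (0.0 disables sleeping)
        time.sleep(backoff * (2 ** i))
    raise RuntimeError("CreateVolume still exhausted after retries")
```

In the log the two `CreateVolume` calls are about 3 seconds apart: the first returns the fake error, the second returns `volume_id":"4"` with the `io.kubernetes.storage.mock/node` topology segment that late binding then uses for placement.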
Nov 13 05:27:28.319: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7924 Nov 13 05:27:28.321: INFO: creating *v1.Role: csi-mock-volumes-7924-1392/external-attacher-cfg-csi-mock-volumes-7924 Nov 13 05:27:28.324: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1392/csi-attacher-role-cfg Nov 13 05:27:28.326: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-provisioner Nov 13 05:27:28.329: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7924 Nov 13 05:27:28.329: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7924 Nov 13 05:27:28.332: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7924 Nov 13 05:27:28.335: INFO: creating *v1.Role: csi-mock-volumes-7924-1392/external-provisioner-cfg-csi-mock-volumes-7924 Nov 13 05:27:28.338: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1392/csi-provisioner-role-cfg Nov 13 05:27:28.340: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-resizer Nov 13 05:27:28.343: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7924 Nov 13 05:27:28.343: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7924 Nov 13 05:27:28.346: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7924 Nov 13 05:27:28.348: INFO: creating *v1.Role: csi-mock-volumes-7924-1392/external-resizer-cfg-csi-mock-volumes-7924 Nov 13 05:27:28.351: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1392/csi-resizer-role-cfg Nov 13 05:27:28.354: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-snapshotter Nov 13 05:27:28.356: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7924 Nov 13 05:27:28.356: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7924 Nov 13 05:27:28.358: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7924 Nov 13 05:27:28.361: INFO: creating *v1.Role: 
csi-mock-volumes-7924-1392/external-snapshotter-leaderelection-csi-mock-volumes-7924 Nov 13 05:27:28.363: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1392/external-snapshotter-leaderelection Nov 13 05:27:28.365: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-mock Nov 13 05:27:28.367: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7924 Nov 13 05:27:28.370: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7924 Nov 13 05:27:28.373: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7924 Nov 13 05:27:28.375: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7924 Nov 13 05:27:28.378: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7924 Nov 13 05:27:28.380: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7924 Nov 13 05:27:28.383: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7924 Nov 13 05:27:28.385: INFO: creating *v1.StatefulSet: csi-mock-volumes-7924-1392/csi-mockplugin Nov 13 05:27:28.390: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7924 Nov 13 05:27:28.427: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7924" Nov 13 05:27:28.429: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7924 to register on node node1 I1113 05:27:34.474948 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7924","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:27:34.630085 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:27:34.638897 36 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7924","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:27:34.641155 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I1113 05:27:34.643454 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:27:34.703422 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7924","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:27:37.948: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1113 05:27:37.972919 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake 
error","FullError":{"code":8,"message":"fake error"}} I1113 05:27:40.890692 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I1113 05:27:42.187899 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:27:42.190: INFO: >>> kubeConfig: /root/.kube/config I1113 05:27:42.299087 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9ee65698-15d4-4187-a35c-b19f72450e8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a","storage.kubernetes.io/csiProvisionerIdentity":"1636781254642-8081-csi-mock-csi-mock-volumes-7924"}},"Response":{},"Error":"","FullError":null} I1113 05:27:42.304861 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:27:42.306: INFO: >>> kubeConfig: /root/.kube/config Nov 13 
05:27:42.410: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:27:42.537: INFO: >>> kubeConfig: /root/.kube/config I1113 05:27:42.619737 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9ee65698-15d4-4187-a35c-b19f72450e8a/globalmount","target_path":"/var/lib/kubelet/pods/b2dd68d4-7f40-47ed-9227-f9e937a5a458/volumes/kubernetes.io~csi/pvc-9ee65698-15d4-4187-a35c-b19f72450e8a/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a","storage.kubernetes.io/csiProvisionerIdentity":"1636781254642-8081-csi-mock-csi-mock-volumes-7924"}},"Response":{},"Error":"","FullError":null} Nov 13 05:27:45.969: INFO: Deleting pod "pvc-volume-tester-42ck6" in namespace "csi-mock-volumes-7924" Nov 13 05:27:45.974: INFO: Wait up to 5m0s for pod "pvc-volume-tester-42ck6" to be fully deleted Nov 13 05:27:48.143: INFO: >>> kubeConfig: /root/.kube/config I1113 05:27:48.227590 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b2dd68d4-7f40-47ed-9227-f9e937a5a458/volumes/kubernetes.io~csi/pvc-9ee65698-15d4-4187-a35c-b19f72450e8a/mount"},"Response":{},"Error":"","FullError":null} I1113 05:27:48.247255 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:27:48.249416 36 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9ee65698-15d4-4187-a35c-b19f72450e8a/globalmount"},"Response":{},"Error":"","FullError":null} I1113 05:27:51.995742 36 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 13 05:27:52.984: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187327", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001becf48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001becf60)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004c0bcf0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004c0bd00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.984: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", 
UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187330", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00327fb60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00327fb78)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00327fb90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00327fba8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0036168e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0036168f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.984: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187331", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, 
loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855938), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855950)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855968), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855980)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855998), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0048559b0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0010faa50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010faa90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.985: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", 
ResourceVersion:"187335", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0048559c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0048559e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0048559f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855a10)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855a28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855a40)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0010fab50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010fabb0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.985: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", 
SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187408", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004947a28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004947a40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004947a58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004947a70)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004947a88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004947aa0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0040ec800), VolumeMode:(*v1.PersistentVolumeMode)(0xc0040ec810), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.985: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187414", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855a70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855a88)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855aa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855ab8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855ad0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855ae8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a", StorageClassName:(*string)(0xc0010fac50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010fac60), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", 
AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.985: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187415", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855b18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855b30)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855b48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855b78)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855b90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855ba8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, 
VolumeName:"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a", StorageClassName:(*string)(0xc0010fad40), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010fae00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.985: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187588", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004855bd8), DeletionGracePeriodSeconds:(*int64)(0xc003a1cb18), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855bf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855c08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855c38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004855c50)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004855c68), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc004855c80)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a", StorageClassName:(*string)(0xc0010faec0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0010faed0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:27:52.985: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vgmqf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7924", SelfLink:"", UID:"9ee65698-15d4-4187-a35c-b19f72450e8a", ResourceVersion:"187589", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378057, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc0043f0750), DeletionGracePeriodSeconds:(*int64)(0xc003459598), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7924", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc0043f0768), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043f0780)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043f0798), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043f07b0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043f07c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043f07e0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9ee65698-15d4-4187-a35c-b19f72450e8a", StorageClassName:(*string)(0xc000bd1530), VolumeMode:(*v1.PersistentVolumeMode)(0xc000bd1550), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-42ck6 Nov 13 05:27:52.986: INFO: Deleting pod "pvc-volume-tester-42ck6" in namespace "csi-mock-volumes-7924" STEP: Deleting claim pvc-vgmqf STEP: Deleting storageclass csi-mock-volumes-7924-scwq4xp STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7924 STEP: Waiting for namespaces [csi-mock-volumes-7924] to vanish STEP: uninstalling csi mock driver Nov 13 05:27:59.018: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-attacher Nov 13 05:27:59.022: INFO: deleting *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-7924
Nov 13 05:27:59.025: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7924
Nov 13 05:27:59.029: INFO: deleting *v1.Role: csi-mock-volumes-7924-1392/external-attacher-cfg-csi-mock-volumes-7924
Nov 13 05:27:59.033: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1392/csi-attacher-role-cfg
Nov 13 05:27:59.036: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-provisioner
Nov 13 05:27:59.040: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7924
Nov 13 05:27:59.043: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7924
Nov 13 05:27:59.046: INFO: deleting *v1.Role: csi-mock-volumes-7924-1392/external-provisioner-cfg-csi-mock-volumes-7924
Nov 13 05:27:59.050: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1392/csi-provisioner-role-cfg
Nov 13 05:27:59.053: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-resizer
Nov 13 05:27:59.056: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7924
Nov 13 05:27:59.060: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7924
Nov 13 05:27:59.064: INFO: deleting *v1.Role: csi-mock-volumes-7924-1392/external-resizer-cfg-csi-mock-volumes-7924
Nov 13 05:27:59.067: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1392/csi-resizer-role-cfg
Nov 13 05:27:59.071: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-snapshotter
Nov 13 05:27:59.074: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7924
Nov 13 05:27:59.077: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7924
Nov 13 05:27:59.081: INFO: deleting *v1.Role: csi-mock-volumes-7924-1392/external-snapshotter-leaderelection-csi-mock-volumes-7924
Nov 13 05:27:59.084: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1392/external-snapshotter-leaderelection
Nov 13 05:27:59.087: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1392/csi-mock
Nov 13 05:27:59.090: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7924
Nov 13 05:27:59.093: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7924
Nov 13 05:27:59.096: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7924
Nov 13 05:27:59.100: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7924
Nov 13 05:27:59.103: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7924
Nov 13 05:27:59.106: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7924
Nov 13 05:27:59.109: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7924
Nov 13 05:27:59.112: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7924-1392/csi-mockplugin
Nov 13 05:27:59.116: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7924
STEP: deleting the driver namespace: csi-mock-volumes-7924-1392
STEP: Waiting for namespaces [csi-mock-volumes-7924-1392] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:43.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:74.888 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
storage capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
exhausted, late binding, with topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":14,"skipped":475,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:38.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428"
Nov 13 05:28:40.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428 && dd if=/dev/zero of=/tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428/file] Namespace:persistent-local-volumes-test-2272 PodName:hostexec-node2-684bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:40.222: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:28:40.328: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2272 PodName:hostexec-node2-684bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:40.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:28:40.485: INFO: Creating a PV followed by a PVC
Nov 13 05:28:40.492: INFO: Waiting for PV local-pvxnwtt to bind to PVC pvc-6x2tg
Nov 13 05:28:40.492: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6x2tg] to have phase Bound
Nov 13 05:28:40.494: INFO: PersistentVolumeClaim pvc-6x2tg found but phase is Pending instead of Bound.
Nov 13 05:28:42.499: INFO: PersistentVolumeClaim pvc-6x2tg found and phase=Bound (2.006621869s)
Nov 13 05:28:42.499: INFO: Waiting up to 3m0s for PersistentVolume local-pvxnwtt to have phase Bound
Nov 13 05:28:42.501: INFO: PersistentVolume local-pvxnwtt found and phase=Bound (2.695081ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Nov 13 05:28:46.530: INFO: pod "pod-819309fc-eb20-4aeb-8722-8d77f0ee0e5a" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:28:46.530: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2272 PodName:pod-819309fc-eb20-4aeb-8722-8d77f0ee0e5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:28:46.530: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:28:46.609: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000175 seconds, 100.4KB/s", err: 
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Nov 13 05:28:46.609: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-2272 PodName:pod-819309fc-eb20-4aeb-8722-8d77f0ee0e5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:28:46.609: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:28:46.685: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: 
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-819309fc-eb20-4aeb-8722-8d77f0ee0e5a in namespace persistent-local-volumes-test-2272
[AfterEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:28:46.691: INFO: Deleting PersistentVolumeClaim "pvc-6x2tg"
Nov 13 05:28:46.697: INFO: Deleting PersistentVolume "local-pvxnwtt"
Nov 13 05:28:46.701: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2272 PodName:hostexec-node2-684bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:46.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428/file
Nov 13 05:28:46.808: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-2272 PodName:hostexec-node2-684bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:46.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428
Nov 13 05:28:46.911: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dad55e0f-b6cf-4bc2-ae26-cb7ff0eac428] Namespace:persistent-local-volumes-test-2272 PodName:hostexec-node2-684bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:46.911: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:47.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2272" for this suite.
• [SLOW TEST:8.851 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":18,"skipped":500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:43.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-file
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124
STEP: Create a pod for further testing
Nov 13 05:28:43.180: INFO: The status of Pod test-hostpath-type-2t4nd is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:28:45.184: INFO: The status of Pod test-hostpath-type-2t4nd is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:28:47.185: INFO: The status of Pod test-hostpath-type-2t4nd is Running (Ready = true)
STEP: running on node node2
STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate
[It] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147
[AfterEach] [sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:55.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-file-250" for this suite.
• [SLOW TEST:12.104 seconds]
[sig-storage] HostPathType File [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147
------------------------------
{"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset","total":-1,"completed":15,"skipped":477,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:23:59.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440
STEP: Creating secret with name s-test-opt-create-20406c21-7b96-4a63-8498-156c4e7a65fd
STEP: Creating the pod
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:28:59.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7126" for this suite.
• [SLOW TEST:300.056 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440
------------------------------
{"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":8,"skipped":276,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:28:25.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] Pod with node different from PV's NodeAffinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354
STEP: Initializing test volumes
Nov 13 05:28:29.321: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e7fcda79-4567-44f6-9e75-d69764bf44fe] Namespace:persistent-local-volumes-test-2469 PodName:hostexec-node2-n6sct ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:29.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:28:29.415: INFO: Creating a PV followed by a PVC
Nov 13 05:28:29.422: INFO: Waiting for PV local-pvdk7jv to bind to PVC pvc-f824p
Nov 13 05:28:29.422: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-f824p] to have phase Bound
Nov 13 05:28:29.424: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:31.429: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:33.435: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:35.438: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:37.443: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:39.447: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:41.452: INFO: PersistentVolumeClaim pvc-f824p found but phase is Pending instead of Bound.
Nov 13 05:28:43.456: INFO: PersistentVolumeClaim pvc-f824p found and phase=Bound (14.0334745s)
Nov 13 05:28:43.456: INFO: Waiting up to 3m0s for PersistentVolume local-pvdk7jv to have phase Bound
Nov 13 05:28:43.458: INFO: PersistentVolume local-pvdk7jv found and phase=Bound (2.152908ms)
[It] should fail scheduling due to different NodeSelector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Nov 13 05:28:43.462: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b7714ff5-2314-4ce1-978f-7f87369fcc86] Namespace:persistent-local-volumes-test-2469 PodName:hostexec-node2-n6sct ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:43.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:28:43.802: INFO: Creating a PV followed by a PVC
Nov 13 05:28:43.808: INFO: Waiting for PV local-pvhz4st to bind to PVC pvc-xxk6r
Nov 13 05:28:43.808: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xxk6r] to have phase Bound
Nov 13 05:28:43.810: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:45.813: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:47.818: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:49.823: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:51.828: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:53.831: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:55.836: INFO: PersistentVolumeClaim pvc-xxk6r found but phase is Pending instead of Bound.
Nov 13 05:28:57.840: INFO: PersistentVolumeClaim pvc-xxk6r found and phase=Bound (14.031983593s)
Nov 13 05:28:57.840: INFO: Waiting up to 3m0s for PersistentVolume local-pvhz4st to have phase Bound
Nov 13 05:28:57.842: INFO: PersistentVolume local-pvhz4st found and phase=Bound (1.921875ms)
Nov 13 05:28:57.859: INFO: Waiting up to 5m0s for pod "pod-7150589b-1cef-409e-ae1a-59eec9dfe85a" in namespace "persistent-local-volumes-test-2469" to be "Unschedulable"
Nov 13 05:28:57.861: INFO: Pod "pod-7150589b-1cef-409e-ae1a-59eec9dfe85a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313886ms
Nov 13 05:28:59.865: INFO: Pod "pod-7150589b-1cef-409e-ae1a-59eec9dfe85a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006452231s
Nov 13 05:28:59.866: INFO: Pod "pod-7150589b-1cef-409e-ae1a-59eec9dfe85a" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370
STEP: Cleaning up PVC and PV
Nov 13 05:28:59.866: INFO: Deleting PersistentVolumeClaim "pvc-f824p"
Nov 13 05:28:59.871: INFO: Deleting PersistentVolume "local-pvdk7jv"
STEP: Removing the test directory
Nov 13 05:28:59.875: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e7fcda79-4567-44f6-9e75-d69764bf44fe] Namespace:persistent-local-volumes-test-2469 PodName:hostexec-node2-n6sct ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:28:59.875: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:00.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2469" for this suite.
• [SLOW TEST:34.825 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Pod with node different from PV's NodeAffinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
should fail scheduling due to different NodeSelector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":15,"skipped":753,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:24:03.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460
STEP: Creating the pod
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:03.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7239" for this suite.
• [SLOW TEST:300.056 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":3,"skipped":198,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:28:59.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:28:59.597: INFO: The status of Pod test-hostpath-type-85tv8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:29:01.600: INFO: The status of Pod test-hostpath-type-85tv8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:29:03.606: INFO: The status of Pod test-hostpath-type-85tv8 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is 
HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:09.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-3113" for this suite. • [SLOW TEST:10.110 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev","total":-1,"completed":9,"skipped":311,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:28:55.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" 
using path "/tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887" Nov 13 05:28:59.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887 && dd if=/dev/zero of=/tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887/file] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:59.320: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:28:59.666: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:59.666: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:28:59.765: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887 && chmod o+rwx /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:28:59.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:29:00.027: INFO: Creating a PV followed by a PVC Nov 13 05:29:00.033: INFO: Waiting for PV local-pv78zgv to bind to PVC pvc-ngp8q Nov 13 05:29:00.033: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ngp8q] to have 
phase Bound Nov 13 05:29:00.035: INFO: PersistentVolumeClaim pvc-ngp8q found but phase is Pending instead of Bound. Nov 13 05:29:02.042: INFO: PersistentVolumeClaim pvc-ngp8q found and phase=Bound (2.00833642s) Nov 13 05:29:02.042: INFO: Waiting up to 3m0s for PersistentVolume local-pv78zgv to have phase Bound Nov 13 05:29:02.044: INFO: PersistentVolume local-pv78zgv found and phase=Bound (2.122148ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:29:06.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2227 exec pod-dd05a624-2b2d-4718-95fd-e78cc059e16b --namespace=persistent-local-volumes-test-2227 -- stat -c %g /mnt/volume1' Nov 13 05:29:06.341: INFO: stderr: "" Nov 13 05:29:06.341: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:29:10.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2227 exec pod-d5be2d54-54b0-404e-80f0-2862bbfcd398 --namespace=persistent-local-volumes-test-2227 -- stat -c %g /mnt/volume1' Nov 13 05:29:11.176: INFO: stderr: "" Nov 13 05:29:11.176: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-dd05a624-2b2d-4718-95fd-e78cc059e16b in namespace persistent-local-volumes-test-2227 STEP: Deleting second pod STEP: Deleting pod pod-d5be2d54-54b0-404e-80f0-2862bbfcd398 in namespace persistent-local-volumes-test-2227 [AfterEach] [Volume type: blockfswithformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:29:11.185: INFO: Deleting PersistentVolumeClaim "pvc-ngp8q" Nov 13 05:29:11.189: INFO: Deleting PersistentVolume "local-pv78zgv" Nov 13 05:29:11.193: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:11.193: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:11.288: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:11.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887/file Nov 13 05:29:11.381: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:11.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887 Nov 13 05:29:11.493: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad38b763-b933-4f4b-84cb-a662c83d3887] 
Namespace:persistent-local-volumes-test-2227 PodName:hostexec-node2-ld2vl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:11.493: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:11.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2227" for this suite. • [SLOW TEST:16.386 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:09.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Nov 13 05:29:09.721: INFO: Waiting up to 5m0s for pod "metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba" in namespace "downward-api-3627" to be "Succeeded or Failed" Nov 13 05:29:09.724: INFO: Pod "metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.614594ms Nov 13 05:29:11.731: INFO: Pod "metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009324648s Nov 13 05:29:13.734: INFO: Pod "metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013228019s Nov 13 05:29:15.738: INFO: Pod "metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016773543s STEP: Saw pod success Nov 13 05:29:15.738: INFO: Pod "metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba" satisfied condition "Succeeded or Failed" Nov 13 05:29:15.743: INFO: Trying to get logs from node node2 pod metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba container client-container: STEP: delete the pod Nov 13 05:29:15.754: INFO: Waiting for pod metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba to disappear Nov 13 05:29:15.756: INFO: Pod metadata-volume-3d29396e-d408-4a36-ba2b-9dfe8be898ba no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:15.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3627" for this suite. 
• [SLOW TEST:6.077 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":10,"skipped":317,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:15.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Nov 13 05:29:15.801: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:15.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-8334" for this suite. 
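Several ExecWithOptions entries above locate the loop device behind a test's backing file with a `losetup | grep <file> | awk '{ print $1 }'` pipeline run inside the hostexec pod. A standalone sketch of that extraction; since attaching real loop devices needs root, it feeds `awk` a canned losetup-style line instead of live `losetup` output:

```shell
#!/bin/sh
# Hedged sketch of the suite's loop-device lookup. The canned line below
# imitates `losetup` list output; the path is a made-up example.
fake_losetup_output='/dev/loop0 0 0 0 0 /tmp/local-volume-test-demo/file 0 512'
backing_file=/tmp/local-volume-test-demo/file
# Same grep/awk extraction the suite runs, applied to the canned line:
loop_dev=$(printf '%s\n' "$fake_losetup_output" | grep "$backing_file" | awk '{ print $1 }')
echo "$loop_dev"    # prints /dev/loop0
```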
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:15.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 13 05:29:15.871: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:15.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3236" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 13 05:29:15.881: INFO: AfterEach: Cleaning up test resources Nov 13 05:29:15.881: INFO: pvc is nil Nov 13 05:29:15.881: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:156 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:03.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:29:07.384: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-782f4ea6-89f0-4d48-811a-0cbeaeae679d && mount --bind 
/tmp/local-volume-test-782f4ea6-89f0-4d48-811a-0cbeaeae679d /tmp/local-volume-test-782f4ea6-89f0-4d48-811a-0cbeaeae679d] Namespace:persistent-local-volumes-test-78 PodName:hostexec-node1-ppzr2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:07.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:29:07.483: INFO: Creating a PV followed by a PVC Nov 13 05:29:07.491: INFO: Waiting for PV local-pvgnpch to bind to PVC pvc-mq5cv Nov 13 05:29:07.491: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mq5cv] to have phase Bound Nov 13 05:29:07.494: INFO: PersistentVolumeClaim pvc-mq5cv found but phase is Pending instead of Bound. Nov 13 05:29:09.498: INFO: PersistentVolumeClaim pvc-mq5cv found and phase=Bound (2.006547728s) Nov 13 05:29:09.498: INFO: Waiting up to 3m0s for PersistentVolume local-pvgnpch to have phase Bound Nov 13 05:29:09.500: INFO: PersistentVolume local-pvgnpch found and phase=Bound (2.410169ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:29:15.525: INFO: pod "pod-f27ada0a-ffee-45eb-b862-344e9d76e1a2" created on Node "node1" STEP: Writing in pod1 Nov 13 05:29:15.525: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-78 PodName:pod-f27ada0a-ffee-45eb-b862-344e9d76e1a2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:29:15.525: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:15.721: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:29:15.721: INFO: ExecWithOptions {Command:[/bin/sh 
-c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-78 PodName:pod-f27ada0a-ffee-45eb-b862-344e9d76e1a2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:29:15.721: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:15.951: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-f27ada0a-ffee-45eb-b862-344e9d76e1a2 in namespace persistent-local-volumes-test-78 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:29:21.978: INFO: pod "pod-50d25664-1ef1-4935-8e8c-2e5d5608b41b" created on Node "node1" STEP: Reading in pod2 Nov 13 05:29:21.978: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-78 PodName:pod-50d25664-1ef1-4935-8e8c-2e5d5608b41b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:29:21.978: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:22.091: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-50d25664-1ef1-4935-8e8c-2e5d5608b41b in namespace persistent-local-volumes-test-78 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:29:22.098: INFO: Deleting PersistentVolumeClaim "pvc-mq5cv" Nov 13 05:29:22.101: INFO: Deleting PersistentVolume "local-pvgnpch" STEP: Removing the test directory Nov 13 05:29:22.106: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-782f4ea6-89f0-4d48-811a-0cbeaeae679d && rm -r /tmp/local-volume-test-782f4ea6-89f0-4d48-811a-0cbeaeae679d] Namespace:persistent-local-volumes-test-78 PodName:hostexec-node1-ppzr2 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:22.106: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:22.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-78" for this suite. • [SLOW TEST:18.889 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":230,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:22.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517" Nov 13 05:29:26.327: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517 && dd if=/dev/zero of=/tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517/file] Namespace:persistent-local-volumes-test-4368 PodName:hostexec-node2-ch9ln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:26.327: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:26.469: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4368 PodName:hostexec-node2-ch9ln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:26.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:29:26.559: INFO: Creating a PV followed by a PVC Nov 13 05:29:26.566: INFO: Waiting for PV local-pvdkz7h to bind to PVC pvc-695lg Nov 13 05:29:26.566: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-695lg] to have phase Bound Nov 13 
05:29:26.568: INFO: PersistentVolumeClaim pvc-695lg found but phase is Pending instead of Bound. Nov 13 05:29:28.572: INFO: PersistentVolumeClaim pvc-695lg found and phase=Bound (2.006705844s) Nov 13 05:29:28.572: INFO: Waiting up to 3m0s for PersistentVolume local-pvdkz7h to have phase Bound Nov 13 05:29:28.575: INFO: PersistentVolume local-pvdkz7h found and phase=Bound (2.780965ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:29:32.606: INFO: pod "pod-58e8de17-ab69-4979-b321-9917bdfcb300" created on Node "node2" STEP: Writing in pod1 Nov 13 05:29:32.606: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4368 PodName:pod-58e8de17-ab69-4979-b321-9917bdfcb300 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:29:32.606: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:32.709: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000155 seconds, 113.4KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:29:32.709: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] 
Namespace:persistent-local-volumes-test-4368 PodName:pod-58e8de17-ab69-4979-b321-9917bdfcb300 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:29:32.709: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:32.811: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Nov 13 05:29:32.812: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4368 PodName:pod-58e8de17-ab69-4979-b321-9917bdfcb300 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:29:32.812: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:32.911: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000040 seconds, 268.6KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-58e8de17-ab69-4979-b321-9917bdfcb300 in namespace persistent-local-volumes-test-4368 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:29:32.916: INFO: Deleting 
PersistentVolumeClaim "pvc-695lg" Nov 13 05:29:32.922: INFO: Deleting PersistentVolume "local-pvdkz7h" Nov 13 05:29:32.925: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4368 PodName:hostexec-node2-ch9ln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:32.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517/file Nov 13 05:29:33.071: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4368 PodName:hostexec-node2-ch9ln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:33.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517 Nov 13 05:29:33.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cd916a77-140f-4ae7-952a-e93481603517] Namespace:persistent-local-volumes-test-4368 PodName:hostexec-node2-ch9ln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:33.184: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:33.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4368" for this suite. 
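The block-volume setup and teardown above follows a fixed sequence: create a backing file with `dd`, attach it to a loop device, optionally format and mount it, then reverse each step. A sketch of that sequence under the suite's own sizes (bs=4096 count=5120, i.e. a 20 MiB file); the loop-device steps need root, so they appear only as comments:

```shell
#!/bin/sh
# Hedged sketch of the block-volume backing-file lifecycle seen in the log.
dir=$(mktemp -d)                       # stands in for /tmp/local-volume-test-<uuid>
dd if=/dev/zero of="$dir/file" bs=4096 count=5120 2>/dev/null
wc -c < "$dir/file"                    # 20971520 bytes (4096 * 5120)
# Root-only steps the suite runs next, reversed during teardown:
#   losetup -f "$dir/file"             # attach first free loop device
#   mkfs -t ext4 /dev/loopN && mount -t ext4 /dev/loopN "$dir"
#   umount "$dir" && losetup -d /dev/loopN
rm -r "$dir"                           # "Removing the test directory"
```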
• [SLOW TEST:11.126 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":253,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:24:59.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 STEP: Building a driver namespace object, basename csi-mock-volumes-8287 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:24:59.586: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-attacher Nov 13 05:24:59.590: INFO: creating 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8287 Nov 13 05:24:59.590: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8287 Nov 13 05:24:59.594: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8287 Nov 13 05:24:59.596: INFO: creating *v1.Role: csi-mock-volumes-8287-8146/external-attacher-cfg-csi-mock-volumes-8287 Nov 13 05:24:59.598: INFO: creating *v1.RoleBinding: csi-mock-volumes-8287-8146/csi-attacher-role-cfg Nov 13 05:24:59.601: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-provisioner Nov 13 05:24:59.603: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8287 Nov 13 05:24:59.604: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8287 Nov 13 05:24:59.607: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8287 Nov 13 05:24:59.610: INFO: creating *v1.Role: csi-mock-volumes-8287-8146/external-provisioner-cfg-csi-mock-volumes-8287 Nov 13 05:24:59.613: INFO: creating *v1.RoleBinding: csi-mock-volumes-8287-8146/csi-provisioner-role-cfg Nov 13 05:24:59.619: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-resizer Nov 13 05:24:59.625: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8287 Nov 13 05:24:59.625: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8287 Nov 13 05:24:59.628: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8287 Nov 13 05:24:59.633: INFO: creating *v1.Role: csi-mock-volumes-8287-8146/external-resizer-cfg-csi-mock-volumes-8287 Nov 13 05:24:59.636: INFO: creating *v1.RoleBinding: csi-mock-volumes-8287-8146/csi-resizer-role-cfg Nov 13 05:24:59.639: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-snapshotter Nov 13 05:24:59.641: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8287 Nov 13 05:24:59.641: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-8287 Nov 13 05:24:59.644: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8287 Nov 13 05:24:59.646: INFO: creating *v1.Role: csi-mock-volumes-8287-8146/external-snapshotter-leaderelection-csi-mock-volumes-8287 Nov 13 05:24:59.649: INFO: creating *v1.RoleBinding: csi-mock-volumes-8287-8146/external-snapshotter-leaderelection Nov 13 05:24:59.651: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-mock Nov 13 05:24:59.654: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8287 Nov 13 05:24:59.656: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8287 Nov 13 05:24:59.659: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8287 Nov 13 05:24:59.661: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8287 Nov 13 05:24:59.664: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8287 Nov 13 05:24:59.666: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8287 Nov 13 05:24:59.669: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8287 Nov 13 05:24:59.672: INFO: creating *v1.StatefulSet: csi-mock-volumes-8287-8146/csi-mockplugin Nov 13 05:24:59.676: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8287 to register on node node1 STEP: Creating pod Nov 13 05:25:09.194: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:25:09.198: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-56mzg] to have phase Bound Nov 13 05:25:09.200: INFO: PersistentVolumeClaim pvc-56mzg found but phase is Pending instead of Bound. 
Nov 13 05:25:11.205: INFO: PersistentVolumeClaim pvc-56mzg found and phase=Bound (2.006424042s) STEP: Checking if attaching failed and pod cannot start STEP: Checking if VolumeAttachment was created for the pod STEP: Deploy CSIDriver object with attachRequired=false Nov 13 05:27:13.234: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8287 STEP: Wait for the pod in running status STEP: Wait for the volumeattachment to be deleted up to 7m0s STEP: Deleting pod pvc-volume-tester-7f6zg Nov 13 05:29:13.250: INFO: Deleting pod "pvc-volume-tester-7f6zg" in namespace "csi-mock-volumes-8287" Nov 13 05:29:13.254: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7f6zg" to be fully deleted STEP: Deleting claim pvc-56mzg Nov 13 05:29:23.267: INFO: Waiting up to 2m0s for PersistentVolume pvc-22e985f0-9ef0-40ec-8d36-afaf4875d28c to get deleted Nov 13 05:29:23.269: INFO: PersistentVolume pvc-22e985f0-9ef0-40ec-8d36-afaf4875d28c found and phase=Bound (2.777299ms) Nov 13 05:29:25.273: INFO: PersistentVolume pvc-22e985f0-9ef0-40ec-8d36-afaf4875d28c was removed STEP: Deleting storageclass csi-mock-volumes-8287-scchbxv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8287 STEP: Waiting for namespaces [csi-mock-volumes-8287] to vanish STEP: uninstalling csi mock driver Nov 13 05:29:31.286: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-attacher Nov 13 05:29:31.290: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8287 Nov 13 05:29:31.293: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8287 Nov 13 05:29:31.297: INFO: deleting *v1.Role: csi-mock-volumes-8287-8146/external-attacher-cfg-csi-mock-volumes-8287 Nov 13 05:29:31.301: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8287-8146/csi-attacher-role-cfg Nov 13 05:29:31.304: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-provisioner Nov 13 05:29:31.308: INFO: deleting *v1.ClusterRole: 
external-provisioner-runner-csi-mock-volumes-8287 Nov 13 05:29:31.311: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8287 Nov 13 05:29:31.314: INFO: deleting *v1.Role: csi-mock-volumes-8287-8146/external-provisioner-cfg-csi-mock-volumes-8287 Nov 13 05:29:31.321: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8287-8146/csi-provisioner-role-cfg Nov 13 05:29:31.327: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-resizer Nov 13 05:29:31.334: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8287 Nov 13 05:29:31.344: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8287 Nov 13 05:29:31.348: INFO: deleting *v1.Role: csi-mock-volumes-8287-8146/external-resizer-cfg-csi-mock-volumes-8287 Nov 13 05:29:31.358: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8287-8146/csi-resizer-role-cfg Nov 13 05:29:31.361: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-snapshotter Nov 13 05:29:31.365: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8287 Nov 13 05:29:31.368: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8287 Nov 13 05:29:31.372: INFO: deleting *v1.Role: csi-mock-volumes-8287-8146/external-snapshotter-leaderelection-csi-mock-volumes-8287 Nov 13 05:29:31.375: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8287-8146/external-snapshotter-leaderelection Nov 13 05:29:31.378: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8287-8146/csi-mock Nov 13 05:29:31.381: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8287 Nov 13 05:29:31.384: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8287 Nov 13 05:29:31.387: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8287 Nov 13 05:29:31.390: INFO: deleting *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-8287 Nov 13 05:29:31.393: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8287 Nov 13 05:29:31.396: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8287 Nov 13 05:29:31.400: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8287 Nov 13 05:29:31.403: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8287-8146/csi-mockplugin STEP: deleting the driver namespace: csi-mock-volumes-8287-8146 STEP: Waiting for namespaces [csi-mock-volumes-8287-8146] to vanish Nov 13 05:29:43.416: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8287 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:43.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:283.897 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI CSIDriver deployment after pod creation using non-attachable mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:372 should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]","total":-1,"completed":15,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:28:47.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-5418 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:28:47.202: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-attacher Nov 13 05:28:47.206: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5418 Nov 13 05:28:47.206: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5418 Nov 13 05:28:47.208: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5418 Nov 13 05:28:47.212: INFO: creating *v1.Role: csi-mock-volumes-5418-1338/external-attacher-cfg-csi-mock-volumes-5418 Nov 13 05:28:47.215: INFO: creating *v1.RoleBinding: csi-mock-volumes-5418-1338/csi-attacher-role-cfg Nov 13 05:28:47.217: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-provisioner Nov 13 05:28:47.220: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5418 Nov 13 05:28:47.220: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5418 Nov 13 05:28:47.222: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5418 Nov 13 05:28:47.225: INFO: creating *v1.Role: csi-mock-volumes-5418-1338/external-provisioner-cfg-csi-mock-volumes-5418 Nov 13 05:28:47.228: INFO: creating *v1.RoleBinding: csi-mock-volumes-5418-1338/csi-provisioner-role-cfg Nov 13 05:28:47.232: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-5418-1338/csi-resizer Nov 13 05:28:47.235: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5418 Nov 13 05:28:47.235: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5418 Nov 13 05:28:47.238: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5418 Nov 13 05:28:47.241: INFO: creating *v1.Role: csi-mock-volumes-5418-1338/external-resizer-cfg-csi-mock-volumes-5418 Nov 13 05:28:47.243: INFO: creating *v1.RoleBinding: csi-mock-volumes-5418-1338/csi-resizer-role-cfg Nov 13 05:28:47.246: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-snapshotter Nov 13 05:28:47.248: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5418 Nov 13 05:28:47.248: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5418 Nov 13 05:28:47.251: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5418 Nov 13 05:28:47.253: INFO: creating *v1.Role: csi-mock-volumes-5418-1338/external-snapshotter-leaderelection-csi-mock-volumes-5418 Nov 13 05:28:47.255: INFO: creating *v1.RoleBinding: csi-mock-volumes-5418-1338/external-snapshotter-leaderelection Nov 13 05:28:47.257: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-mock Nov 13 05:28:47.261: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5418 Nov 13 05:28:47.263: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5418 Nov 13 05:28:47.266: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5418 Nov 13 05:28:47.269: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5418 Nov 13 05:28:47.271: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5418 Nov 13 05:28:47.273: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5418 Nov 13 
05:28:47.276: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5418 Nov 13 05:28:47.278: INFO: creating *v1.StatefulSet: csi-mock-volumes-5418-1338/csi-mockplugin Nov 13 05:28:47.286: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5418 Nov 13 05:28:47.289: INFO: creating *v1.StatefulSet: csi-mock-volumes-5418-1338/csi-mockplugin-attacher Nov 13 05:28:47.292: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5418" Nov 13 05:28:47.294: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5418 to register on node node2 STEP: Creating pod STEP: checking for CSIInlineVolumes feature Nov 13 05:29:00.841: INFO: Pod inline-volume-kz76w has the following logs: Nov 13 05:29:00.844: INFO: Deleting pod "inline-volume-kz76w" in namespace "csi-mock-volumes-5418" Nov 13 05:29:00.848: INFO: Wait up to 5m0s for pod "inline-volume-kz76w" to be fully deleted STEP: Deleting the previously created pod Nov 13 05:29:02.855: INFO: Deleting pod "pvc-volume-tester-gpr4w" in namespace "csi-mock-volumes-5418" Nov 13 05:29:02.860: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gpr4w" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:29:12.912: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Nov 13 05:29:12.912: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-gpr4w Nov 13 05:29:12.912: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5418 Nov 13 05:29:12.912: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 559fcca0-c339-41c6-b1fd-f20478a8f389 Nov 13 05:29:12.912: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Nov 13 05:29:12.912: INFO: Found NodeUnpublishVolume: {json: 
{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-92e009fbefacf82ae251f161ef93927dc54ad3bfef52c6d423c6da8b31363e0c","target_path":"/var/lib/kubelet/pods/559fcca0-c339-41c6-b1fd-f20478a8f389/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-gpr4w Nov 13 05:29:12.912: INFO: Deleting pod "pvc-volume-tester-gpr4w" in namespace "csi-mock-volumes-5418" STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5418 STEP: Waiting for namespaces [csi-mock-volumes-5418] to vanish STEP: uninstalling csi mock driver Nov 13 05:29:18.923: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-attacher Nov 13 05:29:18.926: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5418 Nov 13 05:29:18.930: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5418 Nov 13 05:29:18.933: INFO: deleting *v1.Role: csi-mock-volumes-5418-1338/external-attacher-cfg-csi-mock-volumes-5418 Nov 13 05:29:18.937: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5418-1338/csi-attacher-role-cfg Nov 13 05:29:18.941: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-provisioner Nov 13 05:29:18.944: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5418 Nov 13 05:29:18.948: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5418 Nov 13 05:29:18.951: INFO: deleting *v1.Role: csi-mock-volumes-5418-1338/external-provisioner-cfg-csi-mock-volumes-5418 Nov 13 05:29:18.954: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5418-1338/csi-provisioner-role-cfg Nov 13 05:29:18.958: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-resizer Nov 13 05:29:18.962: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5418 Nov 13 05:29:18.965: INFO: deleting 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5418 Nov 13 05:29:18.969: INFO: deleting *v1.Role: csi-mock-volumes-5418-1338/external-resizer-cfg-csi-mock-volumes-5418 Nov 13 05:29:18.972: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5418-1338/csi-resizer-role-cfg Nov 13 05:29:18.975: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-snapshotter Nov 13 05:29:18.978: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5418 Nov 13 05:29:18.981: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5418 Nov 13 05:29:18.984: INFO: deleting *v1.Role: csi-mock-volumes-5418-1338/external-snapshotter-leaderelection-csi-mock-volumes-5418 Nov 13 05:29:18.988: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5418-1338/external-snapshotter-leaderelection Nov 13 05:29:18.991: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5418-1338/csi-mock Nov 13 05:29:18.995: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5418 Nov 13 05:29:18.998: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5418 Nov 13 05:29:19.001: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5418 Nov 13 05:29:19.005: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5418 Nov 13 05:29:19.008: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5418 Nov 13 05:29:19.011: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5418 Nov 13 05:29:19.019: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5418 Nov 13 05:29:19.027: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5418-1338/csi-mockplugin Nov 13 05:29:19.034: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5418 Nov 13 05:29:19.041: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5418-1338/csi-mockplugin-attacher STEP: 
deleting the driver namespace: csi-mock-volumes-5418-1338 STEP: Waiting for namespaces [csi-mock-volumes-5418-1338] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:29:47.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:59.925 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":19,"skipped":553,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:15.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 13 05:29:21.951: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-17d27605-98b3-4011-ae53-48b6c31595fc] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:21.951: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:22.074: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5e79f376-5d33-4056-8fec-fffcc090a253] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:22.075: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:22.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-81a6529d-ed45-46a6-a544-75323537a19c] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:22.180: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:22.279: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7ec54a64-6161-486a-9f77-27091fc583b8] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:22.279: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:22.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c9c1a35c-0fbb-4476-9571-a12b5aa6dedd] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Nov 13 05:29:22.382: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:22.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-53840c5e-71b5-4d16-92f8-c5416c950c9f] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:22.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:29:22.553: INFO: Creating a PV followed by a PVC Nov 13 05:29:22.559: INFO: Creating a PV followed by a PVC Nov 13 05:29:22.564: INFO: Creating a PV followed by a PVC Nov 13 05:29:22.571: INFO: Creating a PV followed by a PVC Nov 13 05:29:22.576: INFO: Creating a PV followed by a PVC Nov 13 05:29:22.582: INFO: Creating a PV followed by a PVC Nov 13 05:29:32.623: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 13 05:29:36.638: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bd6798bf-4490-4f44-b412-49985f9cf208] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:36.638: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:36.718: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cfd69b28-51d5-4847-8977-750a68898407] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:36.718: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:36.797: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-098cfda6-6630-4fc0-ac38-49e647b90d3c] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:36.797: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:36.882: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8754c8d1-8517-4022-a195-6d20f5e2232a] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:36.882: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:36.988: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-77858d48-fda9-40e7-9b4d-9a0cd3ea5b29] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:36.989: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:29:37.081: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-19e28907-13fb-4d86-8b89-feb02b9ba3ff] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:29:37.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:29:37.167: INFO: Creating a PV followed by a PVC Nov 13 05:29:37.175: INFO: Creating a PV followed by a PVC Nov 13 05:29:37.181: INFO: Creating a PV followed by a PVC Nov 13 05:29:37.187: INFO: Creating a PV followed by a PVC Nov 13 05:29:37.193: INFO: Creating a PV followed by a PVC Nov 13 05:29:37.199: INFO: Creating a PV followed by a PVC Nov 13 05:29:47.247: INFO: PVCs were not 
bound within 10s (that's good)
[It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425
Nov 13 05:29:47.247: INFO: Runs only when number of nodes >= 3
[AfterEach] StatefulSet with pod affinity [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.248: INFO: Deleting PersistentVolumeClaim "pvc-6ft72"
Nov 13 05:29:47.253: INFO: Deleting PersistentVolume "local-pvjgstl"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.257: INFO: Deleting PersistentVolumeClaim "pvc-4l5hm"
Nov 13 05:29:47.261: INFO: Deleting PersistentVolume "local-pvfvhnb"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.264: INFO: Deleting PersistentVolumeClaim "pvc-2tk9n"
Nov 13 05:29:47.268: INFO: Deleting PersistentVolume "local-pv6z7fd"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.272: INFO: Deleting PersistentVolumeClaim "pvc-gbwk4"
Nov 13 05:29:47.275: INFO: Deleting PersistentVolume "local-pvflpng"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.278: INFO: Deleting PersistentVolumeClaim "pvc-rkg2b"
Nov 13 05:29:47.282: INFO: Deleting PersistentVolume "local-pvm6szs"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.285: INFO: Deleting PersistentVolumeClaim "pvc-w9nns"
Nov 13 05:29:47.289: INFO: Deleting PersistentVolume "local-pvx8mpv"
STEP: Removing the test directory
Nov 13 05:29:47.292: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-17d27605-98b3-4011-ae53-48b6c31595fc] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:47.384: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5e79f376-5d33-4056-8fec-fffcc090a253] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:47.467: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-81a6529d-ed45-46a6-a544-75323537a19c] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:47.567: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7ec54a64-6161-486a-9f77-27091fc583b8] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:47.679: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c9c1a35c-0fbb-4476-9571-a12b5aa6dedd] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:47.764: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-53840c5e-71b5-4d16-92f8-c5416c950c9f] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node1-jgqz6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.861: INFO: Deleting PersistentVolumeClaim "pvc-59mbb"
Nov 13 05:29:47.865: INFO: Deleting PersistentVolume "local-pvt4wl4"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.869: INFO: Deleting PersistentVolumeClaim "pvc-6mnfg"
Nov 13 05:29:47.873: INFO: Deleting PersistentVolume "local-pvc7mq9"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.877: INFO: Deleting PersistentVolumeClaim "pvc-cd5rk"
Nov 13 05:29:47.881: INFO: Deleting PersistentVolume "local-pv67fkn"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.885: INFO: Deleting PersistentVolumeClaim "pvc-8n2nf"
Nov 13 05:29:47.888: INFO: Deleting PersistentVolume "local-pvrhx6x"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.892: INFO: Deleting PersistentVolumeClaim "pvc-hw54f"
Nov 13 05:29:47.895: INFO: Deleting PersistentVolume "local-pvfp7fl"
STEP: Cleaning up PVC and PV
Nov 13 05:29:47.898: INFO: Deleting PersistentVolumeClaim "pvc-qjr47"
Nov 13 05:29:47.902: INFO: Deleting PersistentVolume "local-pvv2nfw"
STEP: Removing the test directory
Nov 13 05:29:47.905: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bd6798bf-4490-4f44-b412-49985f9cf208] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:47.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:48.100: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cfd69b28-51d5-4847-8977-750a68898407] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:48.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:48.598: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-098cfda6-6630-4fc0-ac38-49e647b90d3c] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:48.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:48.677: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8754c8d1-8517-4022-a195-6d20f5e2232a] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:48.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:48.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-77858d48-fda9-40e7-9b4d-9a0cd3ea5b29] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:48.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Nov 13 05:29:48.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-19e28907-13fb-4d86-8b89-feb02b9ba3ff] Namespace:persistent-local-volumes-test-8247 PodName:hostexec-node2-bpsn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:48.837: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:48.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8247" for this suite.

S [SKIPPING] [33.070 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
StatefulSet with pod affinity [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384
should use volumes spread across nodes when pod management is parallel and pod has anti-affinity [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425

Runs only when number of nodes >= 3
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427
------------------------------
SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":16,"skipped":484,"failed":0}
[BeforeEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:11.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49
[It] should allow deletion of pod with invalid volume : secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
Nov 13 05:29:41.693: INFO: Deleting pod "pv-2803"/"pod-ephm-test-projected-8fpd"
Nov 13 05:29:41.693: INFO: Deleting pod "pod-ephm-test-projected-8fpd" in namespace "pv-2803"
Nov 13 05:29:41.698: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-8fpd" to be fully deleted
[AfterEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:51.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2803" for this suite.

• [SLOW TEST:40.056 seconds]
[sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
should allow deletion of pod with invalid volume : secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":17,"skipped":484,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:51.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 13 05:29:51.785: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:51.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8541" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should create metrics for total number of volumes in A/D Controller [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322

Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:47.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-block-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325
STEP: Create a pod for further testing
Nov 13 05:29:47.128: INFO: The status of Pod test-hostpath-type-f9wbn is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:29:49.133: INFO: The status of Pod test-hostpath-type-f9wbn is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:29:51.131: INFO: The status of Pod test-hostpath-type-f9wbn is Running (Ready = true)
STEP: running on node node2
STEP: Create a block device for further testing
Nov 13 05:29:51.134: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6767 PodName:test-hostpath-type-f9wbn ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:29:51.134: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350
[AfterEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:55.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-block-dev-6767" for this suite.

• [SLOW TEST:8.167 seconds]
[sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset","total":-1,"completed":20,"skipped":564,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:51.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-cd033ff0-de5b-4116-a326-484aa69e180a
STEP: Creating a pod to test consume configMaps
Nov 13 05:29:51.881: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee" in namespace "projected-7424" to be "Succeeded or Failed"
Nov 13 05:29:51.883: INFO: Pod "pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103543ms
Nov 13 05:29:53.886: INFO: Pod "pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005193589s
Nov 13 05:29:55.889: INFO: Pod "pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00831286s
STEP: Saw pod success
Nov 13 05:29:55.889: INFO: Pod "pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee" satisfied condition "Succeeded or Failed"
Nov 13 05:29:55.892: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee container agnhost-container:
STEP: delete the pod
Nov 13 05:29:55.911: INFO: Waiting for pod pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee to disappear
Nov 13 05:29:55.913: INFO: Pod pod-projected-configmaps-dbbdd310-a67a-4f7d-9c52-1eb2c2a68bee no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:55.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7424" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":18,"skipped":527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:00.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity unused
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
STEP: Building a driver namespace object, basename csi-mock-volumes-9044
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:29:00.129: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-attacher
Nov 13 05:29:00.132: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9044
Nov 13 05:29:00.132: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9044
Nov 13 05:29:00.134: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9044
Nov 13 05:29:00.137: INFO: creating *v1.Role: csi-mock-volumes-9044-5090/external-attacher-cfg-csi-mock-volumes-9044
Nov 13 05:29:00.140: INFO: creating *v1.RoleBinding: csi-mock-volumes-9044-5090/csi-attacher-role-cfg
Nov 13 05:29:00.142: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-provisioner
Nov 13 05:29:00.146: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9044
Nov 13 05:29:00.146: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9044
Nov 13 05:29:00.149: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9044
Nov 13 05:29:00.152: INFO: creating *v1.Role: csi-mock-volumes-9044-5090/external-provisioner-cfg-csi-mock-volumes-9044
Nov 13 05:29:00.154: INFO: creating *v1.RoleBinding: csi-mock-volumes-9044-5090/csi-provisioner-role-cfg
Nov 13 05:29:00.157: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-resizer
Nov 13 05:29:00.160: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9044
Nov 13 05:29:00.160: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9044
Nov 13 05:29:00.162: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9044
Nov 13 05:29:00.165: INFO: creating *v1.Role: csi-mock-volumes-9044-5090/external-resizer-cfg-csi-mock-volumes-9044
Nov 13 05:29:00.168: INFO: creating *v1.RoleBinding: csi-mock-volumes-9044-5090/csi-resizer-role-cfg
Nov 13 05:29:00.171: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-snapshotter
Nov 13 05:29:00.174: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9044
Nov 13 05:29:00.174: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9044
Nov 13 05:29:00.177: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9044
Nov 13 05:29:00.180: INFO: creating *v1.Role: csi-mock-volumes-9044-5090/external-snapshotter-leaderelection-csi-mock-volumes-9044
Nov 13 05:29:00.183: INFO: creating *v1.RoleBinding: csi-mock-volumes-9044-5090/external-snapshotter-leaderelection
Nov 13 05:29:00.185: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-mock
Nov 13 05:29:00.189: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9044
Nov 13 05:29:00.192: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9044
Nov 13 05:29:00.195: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9044
Nov 13 05:29:00.197: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9044
Nov 13 05:29:00.201: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9044
Nov 13 05:29:00.204: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9044
Nov 13 05:29:00.206: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9044
Nov 13 05:29:00.208: INFO: creating *v1.StatefulSet: csi-mock-volumes-9044-5090/csi-mockplugin
Nov 13 05:29:00.213: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9044
Nov 13 05:29:00.216: INFO: creating *v1.StatefulSet: csi-mock-volumes-9044-5090/csi-mockplugin-attacher
Nov 13 05:29:00.219: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9044"
Nov 13 05:29:00.222: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9044 to register on node node1
STEP: Creating pod
Nov 13 05:29:14.740: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Deleting the previously created pod
Nov 13 05:29:28.763: INFO: Deleting pod "pvc-volume-tester-kvjkl" in namespace "csi-mock-volumes-9044"
Nov 13 05:29:28.769: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kvjkl" to be fully deleted
STEP: Deleting pod pvc-volume-tester-kvjkl
Nov 13 05:29:42.778: INFO: Deleting pod "pvc-volume-tester-kvjkl" in namespace "csi-mock-volumes-9044"
STEP: Deleting claim pvc-2v9rx
Nov 13 05:29:42.789: INFO: Waiting up to 2m0s for PersistentVolume pvc-edb7807d-2e80-4c76-a0a7-b8b399847231 to get deleted
Nov 13 05:29:42.791: INFO: PersistentVolume pvc-edb7807d-2e80-4c76-a0a7-b8b399847231 found and phase=Bound (1.871676ms)
Nov 13 05:29:44.796: INFO: PersistentVolume pvc-edb7807d-2e80-4c76-a0a7-b8b399847231 was removed
STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-9044
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9044
STEP: Waiting for namespaces [csi-mock-volumes-9044] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:29:50.810: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-attacher
Nov 13 05:29:50.814: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9044
Nov 13 05:29:50.817: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9044
Nov 13 05:29:50.821: INFO: deleting *v1.Role: csi-mock-volumes-9044-5090/external-attacher-cfg-csi-mock-volumes-9044
Nov 13 05:29:50.825: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9044-5090/csi-attacher-role-cfg
Nov 13 05:29:50.829: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-provisioner
Nov 13 05:29:50.834: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9044
Nov 13 05:29:50.844: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9044
Nov 13 05:29:50.852: INFO: deleting *v1.Role: csi-mock-volumes-9044-5090/external-provisioner-cfg-csi-mock-volumes-9044
Nov 13 05:29:50.860: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9044-5090/csi-provisioner-role-cfg
Nov 13 05:29:50.863: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-resizer
Nov 13 05:29:50.866: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9044
Nov 13 05:29:50.870: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9044
Nov 13 05:29:50.873: INFO: deleting *v1.Role: csi-mock-volumes-9044-5090/external-resizer-cfg-csi-mock-volumes-9044
Nov 13 05:29:50.877: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9044-5090/csi-resizer-role-cfg
Nov 13 05:29:50.880: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-snapshotter
Nov 13 05:29:50.884: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9044
Nov 13 05:29:50.887: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9044
Nov 13 05:29:50.891: INFO: deleting *v1.Role: csi-mock-volumes-9044-5090/external-snapshotter-leaderelection-csi-mock-volumes-9044
Nov 13 05:29:50.895: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9044-5090/external-snapshotter-leaderelection
Nov 13 05:29:50.898: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9044-5090/csi-mock
Nov 13 05:29:50.903: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9044
Nov 13 05:29:50.907: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9044
Nov 13 05:29:50.911: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9044
Nov 13 05:29:50.914: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9044
Nov 13 05:29:50.917: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9044
Nov 13 05:29:50.920: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9044
Nov 13 05:29:50.923: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9044
Nov 13 05:29:50.926: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9044-5090/csi-mockplugin
Nov 13 05:29:50.931: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9044
Nov 13 05:29:50.935: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9044-5090/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-9044-5090
STEP: Waiting for namespaces [csi-mock-volumes-9044-5090] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:56.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:56.884 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
CSIStorageCapacity unused
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":16,"skipped":755,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Directory [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:49.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-directory
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Directory [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57
STEP: Create a pod for further testing
Nov 13 05:29:49.047: INFO: The status of Pod test-hostpath-type-g2m5w is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:29:51.050: INFO: The status of Pod test-hostpath-type-g2m5w is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:29:53.052: INFO: The status of Pod test-hostpath-type-g2m5w is Running (Ready = true)
STEP: running on node node2
STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate
[It] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Directory [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:29:59.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-directory-4012" for this suite.

• [SLOW TEST:10.102 seconds]
[sig-storage] HostPathType Directory [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting directory 'adir' when HostPathType is HostPathFile
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile","total":-1,"completed":11,"skipped":368,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:55.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-socket
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191
STEP: Create a pod for further testing
Nov 13 05:29:56.013: INFO: The status of Pod test-hostpath-type-4q45l is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:29:58.017: INFO: The status of Pod test-hostpath-type-4q45l is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:30:00.016: INFO: The status of Pod test-hostpath-type-4q45l is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:30:02.019: INFO: The status of Pod test-hostpath-type-4q45l is Running (Ready = true)
STEP: running on node node2
[It] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:04.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-socket-3761" for this suite.

• [SLOW TEST:8.079 seconds]
[sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev","total":-1,"completed":19,"skipped":553,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:57.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-char-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256
STEP: Create a pod for further testing
Nov 13 05:29:57.132: INFO: The status of Pod test-hostpath-type-vdj5r is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:29:59.134: INFO: The status of Pod test-hostpath-type-vdj5r is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:30:01.136: INFO: The status of Pod test-hostpath-type-vdj5r is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:30:03.136: INFO: The status of Pod test-hostpath-type-vdj5r is Running (Ready = true)
STEP: running on node node1
STEP: Create a character device for further testing
Nov 13 05:30:03.139: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-6255 PodName:test-hostpath-type-vdj5r ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:03.139: INFO: >>> kubeConfig: /root/.kube/config
[It] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:05.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-char-dev-6255" for this suite.

• [SLOW TEST:8.159 seconds]
[sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket","total":-1,"completed":17,"skipped":825,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:59.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-char-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256
STEP: Create a pod for further testing
Nov 13 05:29:59.261: INFO: The status of Pod test-hostpath-type-t7b75 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:30:01.265: INFO: The status of Pod test-hostpath-type-t7b75 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:30:03.266: INFO: The status of Pod test-hostpath-type-t7b75 is Running (Ready = true)
STEP: running on node node2
STEP: Create a character device for further testing
Nov 13 05:30:03.269: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-3703 PodName:test-hostpath-type-t7b75 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:03.269: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277
[AfterEach] [sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:07.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-char-dev-3703" for this suite.

• [SLOW TEST:8.171 seconds]
[sig-storage] HostPathType Character Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev","total":-1,"completed":12,"skipped":424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:43.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6"
Nov 13 05:29:45.641: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6 && dd if=/dev/zero of=/tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6/file] Namespace:persistent-local-volumes-test-6856 PodName:hostexec-node1-6zhkr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:45.641: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:29:45.755: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6856 PodName:hostexec-node1-6zhkr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:29:45.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:29:45.851: INFO: Creating a PV followed by a PVC
Nov 13 05:29:45.858: INFO: Waiting for PV local-pvrjmgf to bind to PVC pvc-hfpcb
Nov 13 05:29:45.858: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hfpcb] to have phase Bound
Nov 13 05:29:45.860: INFO: PersistentVolumeClaim pvc-hfpcb found but phase is Pending instead of Bound.
Nov 13 05:29:47.864: INFO: PersistentVolumeClaim pvc-hfpcb found but phase is Pending instead of Bound. Nov 13 05:29:49.868: INFO: PersistentVolumeClaim pvc-hfpcb found but phase is Pending instead of Bound. Nov 13 05:29:51.871: INFO: PersistentVolumeClaim pvc-hfpcb found but phase is Pending instead of Bound. Nov 13 05:29:53.874: INFO: PersistentVolumeClaim pvc-hfpcb found but phase is Pending instead of Bound. Nov 13 05:29:55.878: INFO: PersistentVolumeClaim pvc-hfpcb found but phase is Pending instead of Bound. Nov 13 05:29:57.883: INFO: PersistentVolumeClaim pvc-hfpcb found and phase=Bound (12.024688566s) Nov 13 05:29:57.883: INFO: Waiting up to 3m0s for PersistentVolume local-pvrjmgf to have phase Bound Nov 13 05:29:57.885: INFO: PersistentVolume local-pvrjmgf found and phase=Bound (2.039654ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:30:03.912: INFO: pod "pod-b9209b2f-d7fa-4d9a-ba46-5069537924a6" created on Node "node1" STEP: Writing in pod1 Nov 13 05:30:03.913: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6856 PodName:pod-b9209b2f-d7fa-4d9a-ba46-5069537924a6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:03.913: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:04.007: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm 
/tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000196 seconds, 89.7KB/s", err: Nov 13 05:30:04.007: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-6856 PodName:pod-b9209b2f-d7fa-4d9a-ba46-5069537924a6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:04.007: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:04.086: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-b9209b2f-d7fa-4d9a-ba46-5069537924a6 in namespace persistent-local-volumes-test-6856 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:30:08.115: INFO: pod "pod-a44f2af3-581c-4a02-a35d-b5e33d5bb3a8" created on Node "node1" STEP: Reading in pod2 Nov 13 05:30:08.115: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-6856 PodName:pod-a44f2af3-581c-4a02-a35d-b5e33d5bb3a8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:08.115: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:08.268: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-a44f2af3-581c-4a02-a35d-b5e33d5bb3a8 in namespace persistent-local-volumes-test-6856 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:30:08.274: INFO: Deleting 
PersistentVolumeClaim "pvc-hfpcb" Nov 13 05:30:08.277: INFO: Deleting PersistentVolume "local-pvrjmgf" Nov 13 05:30:08.281: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6856 PodName:hostexec-node1-6zhkr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:08.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6/file Nov 13 05:30:08.389: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6856 PodName:hostexec-node1-6zhkr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:08.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6 Nov 13 05:30:08.491: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bacb3f64-fa0c-4451-9d2b-35ef1c8725a6] Namespace:persistent-local-volumes-test-6856 PodName:hostexec-node1-6zhkr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:08.491: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:08.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6856" for this suite. 
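The [Volume type: block] test above provisions its raw block volume by attaching a loop device to a file on the node, then detaches it during teardown. A minimal sketch of that lifecycle, following the `dd`/`losetup` commands visible in the log (the `loop_dev_for` helper is a hypothetical stand-in for the framework's `E2E_LOOP_DEV` lookup, and the device commands themselves require root):

```shell
#!/bin/sh
# Sketch of the loop-device lifecycle the e2e test drives on the node.
# The attach/detach steps need root; only the parsing helper runs unprivileged.

# loop_dev_for FILE: print the loop device backing FILE, reading
# `losetup` output (one "DEVICE ... BACK-FILE ..." row per line) on stdin.
loop_dev_for() {
    grep -F "$1" | awk '{ print $1 }'
}

# Lifecycle as run by the test (root required; shown for illustration):
#   mkdir -p "$DIR"
#   dd if=/dev/zero of="$DIR/file" bs=4096 count=5120   # 20 MiB backing file
#   losetup -f "$DIR/file"                              # attach first free loop device
#   dev=$(losetup | loop_dev_for "$DIR/file")           # recover the device name
#   losetup -d "$dev" && rm -r "$DIR"                   # teardown
```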
• [SLOW TEST:25.003 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":533,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:08.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:30:08.622: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:08.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-892" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:29:55.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:29:55.299: INFO: The status of Pod test-hostpath-type-47q26 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:29:57.303: INFO: The status of Pod test-hostpath-type-47q26 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:29:59.303: INFO: The status of Pod test-hostpath-type-47q26 is Pending, waiting for it to be Running (with Ready = true) Nov 13 
05:30:01.302: INFO: The status of Pod test-hostpath-type-47q26 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:09.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9004" for this suite. • [SLOW TEST:14.093 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset","total":-1,"completed":21,"skipped":566,"failed":0} [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:09.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Nov 13 05:30:09.379: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:09.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-8545" for this suite. S [SKIPPING] [0.029 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:09.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 13 05:30:09.438: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 
13 05:30:09.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-3671" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:09.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-e97e8d76-6a1e-48b8-aab1-d95b8bf2d644 STEP: Creating a pod to test consume configMaps Nov 13 05:30:09.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020" in namespace "configmap-408" to be "Succeeded or Failed" Nov 13 05:30:09.542: INFO: Pod "pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.285185ms Nov 13 05:30:11.547: INFO: Pod "pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00681504s Nov 13 05:30:13.552: INFO: Pod "pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012064079s STEP: Saw pod success Nov 13 05:30:13.552: INFO: Pod "pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020" satisfied condition "Succeeded or Failed" Nov 13 05:30:13.556: INFO: Trying to get logs from node node2 pod pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020 container agnhost-container: STEP: delete the pod Nov 13 05:30:13.568: INFO: Waiting for pod pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020 to disappear Nov 13 05:30:13.570: INFO: Pod pod-configmaps-7258464f-eb7d-4d9e-aa6f-97004387b020 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:13.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-408" for this suite. 
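The "Succeeded or Failed" wait above is a simple poll loop: the framework re-reads the pod phase roughly every 2s under a 5m cap. A generic sketch of that pattern — the `poll_until` helper and its parameters are made up for illustration, not part of the e2e framework:

```shell
#!/bin/sh
# poll_until TRIES INTERVAL WANT CMD...: run CMD every INTERVAL seconds,
# up to TRIES times, until its stdout equals WANT. Returns 0 on match,
# 1 if the attempts are exhausted first.
poll_until() {
    tries=$1 interval=$2 want=$3
    shift 3
    while [ "$tries" -gt 0 ]; do
        [ "$("$@")" = "$want" ] && return 0
        tries=$((tries - 1))
        sleep "$interval"
    done
    return 1
}

# In the real test the polled command would be something like:
#   kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.phase}'
```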
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":22,"skipped":601,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:08.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499" Nov 13 05:30:10.695: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499 && dd if=/dev/zero of=/tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499/file] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:10.695: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:10.899: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499/file | awk '{ print $1 }') 
2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:10.899: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:11.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499 && chmod o+rwx /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:11.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:30:11.454: INFO: Creating a PV followed by a PVC Nov 13 05:30:11.460: INFO: Waiting for PV local-pv9d7rb to bind to PVC pvc-5ghbw Nov 13 05:30:11.460: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5ghbw] to have phase Bound Nov 13 05:30:11.462: INFO: PersistentVolumeClaim pvc-5ghbw found but phase is Pending instead of Bound. 
Nov 13 05:30:13.466: INFO: PersistentVolumeClaim pvc-5ghbw found and phase=Bound (2.006158531s) Nov 13 05:30:13.466: INFO: Waiting up to 3m0s for PersistentVolume local-pv9d7rb to have phase Bound Nov 13 05:30:13.469: INFO: PersistentVolume local-pv9d7rb found and phase=Bound (2.31144ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:30:17.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4141 exec pod-6ab57b15-9c83-4862-a361-5d53b844bb98 --namespace=persistent-local-volumes-test-4141 -- stat -c %g /mnt/volume1' Nov 13 05:30:17.751: INFO: stderr: "" Nov 13 05:30:17.751: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-6ab57b15-9c83-4862-a361-5d53b844bb98 in namespace persistent-local-volumes-test-4141 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:30:17.755: INFO: Deleting PersistentVolumeClaim "pvc-5ghbw" Nov 13 05:30:17.759: INFO: Deleting PersistentVolume "local-pv9d7rb" Nov 13 05:30:17.763: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:17.763: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:17.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh 
-c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:17.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499/file Nov 13 05:30:17.988: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:17.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499 Nov 13 05:30:18.905: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ed16ba83-cc1b-4b39-9c01-0a627ad64499] Namespace:persistent-local-volumes-test-4141 PodName:hostexec-node1-h2rzq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:18.905: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:19.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4141" for this suite. 
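The fsGroup check in the blockfswithformat test above reduces to comparing `stat -c %g` output (GNU coreutils) against the expected GID — the log shows `stat -c %g /mnt/volume1` returning `1234`. A small sketch; the helper name and paths are illustrative:

```shell
#!/bin/sh
# gid_of PATH: print the numeric group ID owning PATH, as the test's
# `kubectl exec ... stat -c %g /mnt/volume1` check does.
gid_of() {
    stat -c %g "$1"
}

# The e2e assertion then reduces to:
#   [ "$(gid_of /mnt/volume1)" = "1234" ]
```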
• [SLOW TEST:10.747 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":17,"skipped":540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:04.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:30:08.116: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047-backend && ln -s /tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047-backend 
/tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047] Namespace:persistent-local-volumes-test-4412 PodName:hostexec-node2-rkq4r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:08.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:30:08.323: INFO: Creating a PV followed by a PVC Nov 13 05:30:08.330: INFO: Waiting for PV local-pvwzj8f to bind to PVC pvc-pzmvf Nov 13 05:30:08.330: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pzmvf] to have phase Bound Nov 13 05:30:08.332: INFO: PersistentVolumeClaim pvc-pzmvf found but phase is Pending instead of Bound. Nov 13 05:30:10.334: INFO: PersistentVolumeClaim pvc-pzmvf found and phase=Bound (2.003900948s) Nov 13 05:30:10.334: INFO: Waiting up to 3m0s for PersistentVolume local-pvwzj8f to have phase Bound Nov 13 05:30:10.336: INFO: PersistentVolume local-pvwzj8f found and phase=Bound (1.930715ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:30:14.359: INFO: pod "pod-d6809132-60e0-47a7-8b41-bb1e056a06d4" created on Node "node2" STEP: Writing in pod1 Nov 13 05:30:14.359: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4412 PodName:pod-d6809132-60e0-47a7-8b41-bb1e056a06d4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:14.359: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:14.490: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:30:14.490: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-4412 PodName:pod-d6809132-60e0-47a7-8b41-bb1e056a06d4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:14.490: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:14.668: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Nov 13 05:30:18.691: INFO: pod "pod-752db547-d541-4b1c-9c20-dfc31181f5ea" created on Node "node2"
Nov 13 05:30:18.691: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4412 PodName:pod-752db547-d541-4b1c-9c20-dfc31181f5ea ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:18.691: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:18.776: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Nov 13 05:30:18.776: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4412 PodName:pod-752db547-d541-4b1c-9c20-dfc31181f5ea ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:18.776: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:18.889: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047 > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Nov 13 05:30:18.889: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4412 PodName:pod-d6809132-60e0-47a7-8b41-bb1e056a06d4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:18.889: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:19.204: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-d6809132-60e0-47a7-8b41-bb1e056a06d4 in namespace persistent-local-volumes-test-4412
STEP: Deleting pod2
STEP: Deleting pod pod-752db547-d541-4b1c-9c20-dfc31181f5ea in namespace persistent-local-volumes-test-4412
[AfterEach] [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:30:19.215: INFO: Deleting PersistentVolumeClaim "pvc-pzmvf"
Nov 13 05:30:19.218: INFO: Deleting PersistentVolume "local-pvwzj8f"
STEP: Removing the test directory
Nov 13 05:30:19.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047 && rm -r /tmp/local-volume-test-37ec525b-bb76-459f-98dd-2bb8122f7047-backend] Namespace:persistent-local-volumes-test-4412 PodName:hostexec-node2-rkq4r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:19.222: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:19.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4412" for this suite.
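The teardown above removes a "dir-link" volume: a symlink plus its real backing directory. Outside the e2e framework (which runs these commands through a hostexec pod with `nsenter`, requiring host privileges), the same layout and write/read round trip can be sketched without root. Paths below are illustrative, not the ones from this run:

```shell
set -e
# "dir-link" local volume sketch: the PV's local.path is a symlink to a
# real backing directory (illustrative paths, no root required).
base=$(mktemp -d)
mkdir "$base/vol-backend"                       # backing directory
ln -s "$base/vol-backend" "$base/vol"           # the PV path would point at this symlink
echo test-file-content > "$base/vol/test-file"  # "write from pod1", through the link
readback=$(cat "$base/vol-backend/test-file")   # same bytes visible via the backend dir
echo "$readback"
rm -r "$base/vol"                               # removes only the symlink
rm -r "$base/vol-backend"                       # then the backing directory, as the test does
rmdir "$base"
```

The teardown order matters: `rm -r` on the symlink (no trailing slash) unlinks the link itself rather than following it, so the backend directory can then be removed separately.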
• [SLOW TEST:15.483 seconds]
[sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":20,"skipped":556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:30:05.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 13 05:30:09.321: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3-backend && mount --bind /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3-backend /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3-backend && ln -s /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3-backend /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3] Namespace:persistent-local-volumes-test-9422 PodName:hostexec-node2-k7wg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:09.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:30:09.411: INFO: Creating a PV followed by a PVC
Nov 13 05:30:09.422: INFO: Waiting for PV local-pvxmb7l to bind to PVC pvc-n8lmr
Nov 13 05:30:09.422: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-n8lmr] to have phase Bound
Nov 13 05:30:09.427: INFO: PersistentVolumeClaim pvc-n8lmr found but phase is Pending instead of Bound.
Nov 13 05:30:11.430: INFO: PersistentVolumeClaim pvc-n8lmr found but phase is Pending instead of Bound.
Nov 13 05:30:13.435: INFO: PersistentVolumeClaim pvc-n8lmr found and phase=Bound (4.013501921s)
Nov 13 05:30:13.435: INFO: Waiting up to 3m0s for PersistentVolume local-pvxmb7l to have phase Bound
Nov 13 05:30:13.438: INFO: PersistentVolume local-pvxmb7l found and phase=Bound (2.449043ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Nov 13 05:30:17.466: INFO: pod "pod-9afbe1b5-dc8e-4d49-bd81-af367de23822" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:30:17.466: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9422 PodName:pod-9afbe1b5-dc8e-4d49-bd81-af367de23822 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:17.466: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:17.547: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Nov 13 05:30:17.547: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9422 PodName:pod-9afbe1b5-dc8e-4d49-bd81-af367de23822 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:17.547: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:17.632: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Nov 13 05:30:23.663: INFO: pod "pod-33a0e9a9-e236-4194-a611-b8fd8c2add29" created on Node "node2"
Nov 13 05:30:23.663: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9422 PodName:pod-33a0e9a9-e236-4194-a611-b8fd8c2add29 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:23.663: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:23.948: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Nov 13 05:30:23.948: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9422 PodName:pod-33a0e9a9-e236-4194-a611-b8fd8c2add29 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:23.948: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:24.082: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3 > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Nov 13 05:30:24.082: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9422 PodName:pod-9afbe1b5-dc8e-4d49-bd81-af367de23822 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:30:24.082: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:24.201: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-9afbe1b5-dc8e-4d49-bd81-af367de23822 in namespace persistent-local-volumes-test-9422
STEP: Deleting pod2
STEP: Deleting pod pod-33a0e9a9-e236-4194-a611-b8fd8c2add29 in namespace persistent-local-volumes-test-9422
[AfterEach] [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:30:24.212: INFO: Deleting PersistentVolumeClaim "pvc-n8lmr"
Nov 13 05:30:24.215: INFO: Deleting PersistentVolume "local-pvxmb7l"
STEP: Removing the test directory
Nov 13 05:30:24.219: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3 && umount /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3-backend && rm -r /tmp/local-volume-test-901c197f-18b4-42fa-b94f-31fae010b7f3-backend] Namespace:persistent-local-volumes-test-9422 PodName:hostexec-node2-k7wg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:24.219: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:24.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9422" for this suite.
• [SLOW TEST:19.084 seconds]
[sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":18,"skipped":831,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:30:24.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
Nov 13 05:30:24.469: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:24.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-2831" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Serial] attach on previously attached volumes should work [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
  Requires at least 2 nodes (not -1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:30:07.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36"
Nov 13 05:30:11.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36 && dd if=/dev/zero of=/tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36/file] Namespace:persistent-local-volumes-test-9610 PodName:hostexec-node2-cmg79 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:11.500: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:30:11.712: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9610 PodName:hostexec-node2-cmg79 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:11.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:30:12.252: INFO: Creating a PV followed by a PVC
Nov 13 05:30:12.258: INFO: Waiting for PV local-pvxqr2g to bind to PVC pvc-sqs24
Nov 13 05:30:12.258: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-sqs24] to have phase Bound
Nov 13 05:30:12.260: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:14.263: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:16.267: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:18.271: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:20.278: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:22.284: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:24.287: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:26.290: INFO: PersistentVolumeClaim pvc-sqs24 found but phase is Pending instead of Bound.
Nov 13 05:30:28.296: INFO: PersistentVolumeClaim pvc-sqs24 found and phase=Bound (16.038220912s)
Nov 13 05:30:28.296: INFO: Waiting up to 3m0s for PersistentVolume local-pvxqr2g to have phase Bound
Nov 13 05:30:28.298: INFO: PersistentVolume local-pvxqr2g found and phase=Bound (1.733667ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
Nov 13 05:30:28.302: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:30:28.304: INFO: Deleting PersistentVolumeClaim "pvc-sqs24"
Nov 13 05:30:28.307: INFO: Deleting PersistentVolume "local-pvxqr2g"
Nov 13 05:30:28.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9610 PodName:hostexec-node2-cmg79 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:28.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36/file
Nov 13 05:30:28.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9610 PodName:hostexec-node2-cmg79 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:28.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36
Nov 13 05:30:28.509: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-27e5df0d-63ba-4c5a-bba1-99b5c8f20d36] Namespace:persistent-local-volumes-test-9610 PodName:hostexec-node2-cmg79 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:30:28.509: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:28.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9610" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [21.169 seconds]
[sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
      We don't set fsGroup on block device, skipped.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
------------------------------
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:25:28.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469
STEP: Creating configMap with name cm-test-opt-create-2d4bf0d2-f44f-4037-bd1e-4cd8cbaed35a
STEP: Creating the pod
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:30:29.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5497" for this suite.
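The block-volume setup in this log discovers the attached loop device with `E2E_LOOP_DEV=$(losetup | grep <file> | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}`. The redirection order there is deliberate and easy to misread: `2>&1 > /dev/null` first duplicates stderr onto the *current* stdout and only then discards stdout, so error text stays visible while normal listing output is suppressed. A minimal stand-alone demonstration of that ordering, using plain `echo`s rather than `losetup`:

```shell
# `2>&1 > /dev/null`: fd 2 is pointed at the current stdout first, then fd 1
# is sent to /dev/null - so stdout text is dropped and stderr text survives.
captured=$( { echo on-stdout; echo on-stderr >&2; } 2>&1 > /dev/null )
echo "$captured"   # prints "on-stderr"
```

Written the other way around (`> /dev/null 2>&1`), both streams would be discarded, which is why the order in the e2e command matters.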
• [SLOW TEST:300.065 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":8,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:24.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:30:24.558: INFO: The status of Pod test-hostpath-type-9dvn2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:30:26.562: INFO: The status of Pod test-hostpath-type-9dvn2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:30:28.564: INFO: The status of Pod test-hostpath-type-9dvn2 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:30:30.560: INFO: The status of Pod test-hostpath-type-9dvn2 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting 
non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:36.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-5760" for this suite. • [SLOW TEST:12.095 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory","total":-1,"completed":19,"skipped":884,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:19.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35" Nov 13 05:30:25.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35 && dd if=/dev/zero of=/tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35/file] Namespace:persistent-local-volumes-test-1352 PodName:hostexec-node2-zmd4q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:25.500: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:25.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1352 PodName:hostexec-node2-zmd4q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:25.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:30:26.245: INFO: Creating a PV followed by a PVC Nov 13 05:30:26.252: INFO: Waiting for PV local-pvmjxtf to bind to PVC pvc-5nnbc Nov 13 05:30:26.252: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5nnbc] to have phase Bound Nov 13 05:30:26.254: INFO: PersistentVolumeClaim pvc-5nnbc found but phase is Pending instead of Bound. 
Nov 13 05:30:28.258: INFO: PersistentVolumeClaim pvc-5nnbc found and phase=Bound (2.006010452s) Nov 13 05:30:28.258: INFO: Waiting up to 3m0s for PersistentVolume local-pvmjxtf to have phase Bound Nov 13 05:30:28.261: INFO: PersistentVolume local-pvmjxtf found and phase=Bound (2.532335ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:30:34.289: INFO: pod "pod-5d5a5731-5819-4611-90f8-0b3a9a7b7403" created on Node "node2" STEP: Writing in pod1 Nov 13 05:30:34.289: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1352 PodName:pod-5d5a5731-5819-4611-90f8-0b3a9a7b7403 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:34.289: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:34.387: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000177 seconds, 99.3KB/s", err: Nov 13 05:30:34.387: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1352 PodName:pod-5d5a5731-5819-4611-90f8-0b3a9a7b7403 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:34.387: INFO: >>> kubeConfig: /root/.kube/config Nov 13 
05:30:34.474: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:30:38.496: INFO: pod "pod-a336554d-ae59-41c4-93a0-0afdba3661a7" created on Node "node2" Nov 13 05:30:38.496: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1352 PodName:pod-a336554d-ae59-41c4-93a0-0afdba3661a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:38.496: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:38.580: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Nov 13 05:30:38.580: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop1 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1352 PodName:pod-a336554d-ae59-41c4-93a0-0afdba3661a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:38.580: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:38.670: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop1 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000034 seconds, 315.9KB/s", err: STEP: Reading in pod1 Nov 13 
05:30:38.670: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1352 PodName:pod-5d5a5731-5819-4611-90f8-0b3a9a7b7403 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:38.670: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:38.751: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop1.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5d5a5731-5819-4611-90f8-0b3a9a7b7403 in namespace persistent-local-volumes-test-1352 STEP: Deleting pod2 STEP: Deleting pod pod-a336554d-ae59-41c4-93a0-0afdba3661a7 in namespace persistent-local-volumes-test-1352 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:30:38.762: INFO: Deleting PersistentVolumeClaim "pvc-5nnbc" Nov 13 05:30:38.766: INFO: Deleting PersistentVolume "local-pvmjxtf" Nov 13 05:30:38.770: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1352 PodName:hostexec-node2-zmd4q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:38.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35/file Nov 13 05:30:38.866: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-1352 
PodName:hostexec-node2-zmd4q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:38.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35 Nov 13 05:30:38.956: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1b07c0fa-38e7-4dab-aa85-f28ff4839b35] Namespace:persistent-local-volumes-test-1352 PodName:hostexec-node2-zmd4q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:38.956: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:39.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1352" for this suite. 
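The raw-block write/read pattern exercised by the test above (dd a payload into the device, verify with `hexdump`) can be sketched unprivileged against a plain file instead of the real `/dev/loop1` device. All paths here are stand-ins, and util-linux `hexdump` is assumed to be installed:

```shell
# Sketch of the e2e raw-block write/read check, unprivileged.
# $vol stands in for the block device mounted at /mnt/volume1.
set -eu
vol=$(mktemp)
tmp=$(mktemp)
echo test-file-content > "$tmp"                     # payload "pod1" writes
# dd copies the payload into the "device"; conv=notrunc mirrors writing
# into a fixed-size device rather than truncating a regular file
dd if="$tmp" of="$vol" bs=512 count=100 conv=notrunc 2>/dev/null
# The test reads back the first 100 bytes as printable characters;
# %_p renders non-printing bytes as dots, hence the trailing "...." runs
# seen in the log output
out=$(hexdump -n 100 -e '100 "%_p"' "$vol" | head -1)
echo "$out"
rm -f "$vol" "$tmp"
```

This is why the log shows `test-file-content` followed by a run of dots: the dots are the non-printing remainder of the 100-byte window.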
• [SLOW TEST:19.607 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":18,"skipped":566,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:39.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253" Nov 13 05:30:41.179: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253" "/tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253"] Namespace:persistent-local-volumes-test-6824 PodName:hostexec-node1-m7pgz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:41.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:30:41.285: INFO: Creating a PV followed by a PVC Nov 13 05:30:41.292: INFO: Waiting for PV local-pvdmpfg to bind to PVC pvc-lcxxt Nov 13 05:30:41.292: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lcxxt] to have phase Bound Nov 13 05:30:41.294: INFO: PersistentVolumeClaim pvc-lcxxt found but phase is Pending instead of Bound. Nov 13 05:30:43.300: INFO: PersistentVolumeClaim pvc-lcxxt found and phase=Bound (2.00728238s) Nov 13 05:30:43.300: INFO: Waiting up to 3m0s for PersistentVolume local-pvdmpfg to have phase Bound Nov 13 05:30:43.303: INFO: PersistentVolume local-pvdmpfg found and phase=Bound (3.046971ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:30:47.325: INFO: pod "pod-5de7820d-b4d2-4859-b4cf-5d4f92e57b9e" created on Node "node1" STEP: Writing in pod1 Nov 13 05:30:47.326: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6824 PodName:pod-5de7820d-b4d2-4859-b4cf-5d4f92e57b9e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:47.326: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:47.434: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > 
/mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:30:47.434: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6824 PodName:pod-5de7820d-b4d2-4859-b4cf-5d4f92e57b9e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:47.434: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:47.518: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5de7820d-b4d2-4859-b4cf-5d4f92e57b9e in namespace persistent-local-volumes-test-6824 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:30:51.546: INFO: pod "pod-2757fff6-f0cc-4628-974f-24ac405cbc8e" created on Node "node1" STEP: Reading in pod2 Nov 13 05:30:51.546: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6824 PodName:pod-2757fff6-f0cc-4628-974f-24ac405cbc8e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:51.546: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:51.639: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-2757fff6-f0cc-4628-974f-24ac405cbc8e in namespace persistent-local-volumes-test-6824 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:30:51.644: INFO: Deleting PersistentVolumeClaim "pvc-lcxxt" Nov 13 05:30:51.648: INFO: Deleting PersistentVolume "local-pvdmpfg" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253" Nov 13 05:30:51.652: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253"] Namespace:persistent-local-volumes-test-6824 PodName:hostexec-node1-m7pgz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:51.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:30:51.746: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-000abd34-dab8-4e88-9182-2db8f82bd253] Namespace:persistent-local-volumes-test-6824 PodName:hostexec-node1-m7pgz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:51.746: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:51.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6824" for this suite. 
• [SLOW TEST:12.763 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":19,"skipped":572,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:36.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Nov 13 05:30:48.686: INFO: Warning: Making PVC: 
VolumeMode specified as invalid empty string, treating as nil STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-4422 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-4422-glusterdptestr29zm,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Nov 13 05:30:48.691: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-dj7rf] to have phase Bound Nov 13 05:30:48.694: INFO: PersistentVolumeClaim pvc-dj7rf found but phase is Pending instead of Bound. Nov 13 05:30:50.696: INFO: PersistentVolumeClaim pvc-dj7rf found and phase=Bound (2.004840118s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-4422"/"pvc-dj7rf" STEP: deleting the claim's PV "pvc-44db9231-9e58-4b92-ba2e-6e71860e1689" Nov 13 05:30:50.706: INFO: Waiting up to 20m0s for PersistentVolume pvc-44db9231-9e58-4b92-ba2e-6e71860e1689 to get deleted Nov 13 05:30:50.708: INFO: PersistentVolume pvc-44db9231-9e58-4b92-ba2e-6e71860e1689 found and phase=Bound (2.531923ms) Nov 13 05:30:55.712: INFO: PersistentVolume pvc-44db9231-9e58-4b92-ba2e-6e71860e1689 was removed Nov 13 05:30:55.712: INFO: deleting claim "volume-provisioning-4422"/"pvc-dj7rf" Nov 13 05:30:55.716: INFO: deleting storage class volume-provisioning-4422-glusterdptestr29zm [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:30:55.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4422" for this suite. 
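The claim object dumped in the log above (GenerateName `pvc-`, ReadWriteOnce, 2Gi request, the generated StorageClass) corresponds to a manifest roughly like the one below. This is a reconstruction from the logged spec, not output of the test itself, and applying it would require a live cluster with that StorageClass:

```shell
# Write out a PVC manifest equivalent to the logged claim spec and
# sanity-check one field. Names are taken from this specific run.
set -eu
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  generateName: pvc-
  namespace: volume-provisioning-4422
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: volume-provisioning-4422-glusterdptestr29zm
  resources:
    requests:
      storage: 2Gi   # 2147483648 bytes, BinarySI in the logged spec
EOF
has=$(grep -c 'storage: 2Gi' "$manifest")
echo "$has"
rm -f "$manifest"
```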
• [SLOW TEST:19.092 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":20,"skipped":890,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:51.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:30:53.915: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-38caa708-9566-4fdd-aa2d-6c68c29c4ae7 && mount --bind 
/tmp/local-volume-test-38caa708-9566-4fdd-aa2d-6c68c29c4ae7 /tmp/local-volume-test-38caa708-9566-4fdd-aa2d-6c68c29c4ae7] Namespace:persistent-local-volumes-test-7284 PodName:hostexec-node2-whqbs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:30:53.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:30:54.012: INFO: Creating a PV followed by a PVC Nov 13 05:30:54.020: INFO: Waiting for PV local-pvwmjp4 to bind to PVC pvc-wrjcc Nov 13 05:30:54.020: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wrjcc] to have phase Bound Nov 13 05:30:54.022: INFO: PersistentVolumeClaim pvc-wrjcc found but phase is Pending instead of Bound. Nov 13 05:30:56.025: INFO: PersistentVolumeClaim pvc-wrjcc found but phase is Pending instead of Bound. Nov 13 05:30:58.030: INFO: PersistentVolumeClaim pvc-wrjcc found and phase=Bound (4.010169377s) Nov 13 05:30:58.030: INFO: Waiting up to 3m0s for PersistentVolume local-pvwmjp4 to have phase Bound Nov 13 05:30:58.032: INFO: PersistentVolume local-pvwmjp4 found and phase=Bound (2.14789ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:31:02.062: INFO: pod "pod-2ac63543-936b-4ca8-bb66-56d594d34462" created on Node "node2" STEP: Writing in pod1 Nov 13 05:31:02.062: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7284 PodName:pod-2ac63543-936b-4ca8-bb66-56d594d34462 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:02.062: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:02.146: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > 
/mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:31:02.146: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7284 PodName:pod-2ac63543-936b-4ca8-bb66-56d594d34462 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:02.146: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:02.225: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2ac63543-936b-4ca8-bb66-56d594d34462 in namespace persistent-local-volumes-test-7284 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:31:02.231: INFO: Deleting PersistentVolumeClaim "pvc-wrjcc" Nov 13 05:31:02.235: INFO: Deleting PersistentVolume "local-pvwmjp4" STEP: Removing the test directory Nov 13 05:31:02.240: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-38caa708-9566-4fdd-aa2d-6c68c29c4ae7 && rm -r /tmp/local-volume-test-38caa708-9566-4fdd-aa2d-6c68c29c4ae7] Namespace:persistent-local-volumes-test-7284 PodName:hostexec-node2-whqbs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:02.240: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:02.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7284" for this suite. • [SLOW TEST:10.481 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":20,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:02.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29" Nov 13 05:31:06.454: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29 && dd if=/dev/zero of=/tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29/file] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:06.454: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:06.566: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:06.566: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:06.655: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29 && chmod o+rwx /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:06.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:31:06.898: INFO: Creating a 
PV followed by a PVC Nov 13 05:31:06.906: INFO: Waiting for PV local-pvvtptn to bind to PVC pvc-7ngh4 Nov 13 05:31:06.906: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7ngh4] to have phase Bound Nov 13 05:31:06.908: INFO: PersistentVolumeClaim pvc-7ngh4 found but phase is Pending instead of Bound. Nov 13 05:31:08.912: INFO: PersistentVolumeClaim pvc-7ngh4 found but phase is Pending instead of Bound. Nov 13 05:31:10.917: INFO: PersistentVolumeClaim pvc-7ngh4 found but phase is Pending instead of Bound. Nov 13 05:31:12.921: INFO: PersistentVolumeClaim pvc-7ngh4 found and phase=Bound (6.015166283s) Nov 13 05:31:12.921: INFO: Waiting up to 3m0s for PersistentVolume local-pvvtptn to have phase Bound Nov 13 05:31:12.923: INFO: PersistentVolume local-pvvtptn found and phase=Bound (2.129957ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:31:16.951: INFO: pod "pod-40f10069-8920-4872-af5b-45aa5f50df1c" created on Node "node2" STEP: Writing in pod1 Nov 13 05:31:16.951: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5302 PodName:pod-40f10069-8920-4872-af5b-45aa5f50df1c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:16.951: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:17.294: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:31:17.294: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5302 PodName:pod-40f10069-8920-4872-af5b-45aa5f50df1c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} 
Nov 13 05:31:17.295: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:17.444: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-40f10069-8920-4872-af5b-45aa5f50df1c in namespace persistent-local-volumes-test-5302 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:31:23.470: INFO: pod "pod-fda510e6-e3a3-4b15-b912-8d6cfce5ddbe" created on Node "node2" STEP: Reading in pod2 Nov 13 05:31:23.470: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5302 PodName:pod-fda510e6-e3a3-4b15-b912-8d6cfce5ddbe ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:23.470: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:23.571: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-fda510e6-e3a3-4b15-b912-8d6cfce5ddbe in namespace persistent-local-volumes-test-5302 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:31:23.576: INFO: Deleting PersistentVolumeClaim "pvc-7ngh4" Nov 13 05:31:23.579: INFO: Deleting PersistentVolume "local-pvvtptn" Nov 13 05:31:23.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:23.584: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:23.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep 
/tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:23.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29/file Nov 13 05:31:23.753: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:23.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29 Nov 13 05:31:23.856: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-32df90fb-ffa3-4b78-ae6b-e8905622fa29] Namespace:persistent-local-volumes-test-5302 PodName:hostexec-node2-h89pm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:23.856: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:23.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5302" for this suite. 
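The blockfswithformat setup above runs dd → losetup → mkfs → mount; only the first step (creating the backing file the loop device is attached to) is reproducible without root. A minimal sketch of that step, with a stand-in directory:

```shell
# Create the 20 MiB backing file the test hands to losetup.
# losetup/mkfs/mount from the log require root and are omitted here.
set -eu
dir=$(mktemp -d)                                  # stand-in for /tmp/local-volume-test-...
dd if=/dev/zero of="$dir/file" bs=4096 count=5120 2>/dev/null
size=$(wc -c < "$dir/file")
echo "$size"                                      # 4096 * 5120 = 20971520 bytes
rm -rf "$dir"
```

The teardown in the log is the mirror image: `umount`, then `losetup -d` on the device found via `losetup | grep <file>`, then `rm -r` on the directory.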
• [SLOW TEST:21.538 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":21,"skipped":605,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:21:21.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 STEP: Creating local PVC and PV Nov 13 05:21:21.909: INFO: Creating a PV followed by a PVC Nov 13 05:21:21.918: INFO: Waiting for PV local-pvplbn8 to bind to PVC pvc-xk2cp Nov 13 05:21:21.918: INFO: Waiting up 
to timeout=3m0s for PersistentVolumeClaims [pvc-xk2cp] to have phase Bound Nov 13 05:21:21.920: INFO: PersistentVolumeClaim pvc-xk2cp found but phase is Pending instead of Bound. Nov 13 05:21:23.922: INFO: PersistentVolumeClaim pvc-xk2cp found and phase=Bound (2.00443294s) Nov 13 05:21:23.922: INFO: Waiting up to 3m0s for PersistentVolume local-pvplbn8 to have phase Bound Nov 13 05:21:23.926: INFO: PersistentVolume local-pvplbn8 found and phase=Bound (4.085452ms) STEP: Creating a pod STEP: Cleaning up PVC and PV Nov 13 05:31:24.011: INFO: Deleting PersistentVolumeClaim "pvc-xk2cp" Nov 13 05:31:24.015: INFO: Deleting PersistentVolume "local-pvplbn8" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:24.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3088" for this suite. • [SLOW TEST:602.144 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path","total":-1,"completed":6,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:24.076: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-791b144e-acdb-49d2-af87-771858f32d2d STEP: Creating a pod to test consume secrets Nov 13 05:31:24.129: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75" in namespace "projected-7378" to be "Succeeded or Failed" Nov 13 05:31:24.131: INFO: Pod "pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349933ms Nov 13 05:31:26.136: INFO: Pod "pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00705848s Nov 13 05:31:28.144: INFO: Pod "pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015317486s Nov 13 05:31:30.150: INFO: Pod "pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.021217699s STEP: Saw pod success Nov 13 05:31:30.150: INFO: Pod "pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75" satisfied condition "Succeeded or Failed" Nov 13 05:31:30.153: INFO: Trying to get logs from node node2 pod pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75 container projected-secret-volume-test: STEP: delete the pod Nov 13 05:31:30.171: INFO: Waiting for pod pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75 to disappear Nov 13 05:31:30.173: INFO: Pod pod-projected-secrets-ed066fdf-af78-4dd4-a09e-50acd0fa3c75 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:30.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7378" for this suite. STEP: Destroying namespace "secret-namespace-8370" for this suite. • [SLOW TEST:6.110 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":7,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:19.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-3244 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:30:19.657: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-attacher Nov 13 05:30:19.660: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3244 Nov 13 05:30:19.660: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3244 Nov 13 05:30:19.663: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3244 Nov 13 05:30:19.666: INFO: creating *v1.Role: csi-mock-volumes-3244-4587/external-attacher-cfg-csi-mock-volumes-3244 Nov 13 05:30:19.668: INFO: creating *v1.RoleBinding: csi-mock-volumes-3244-4587/csi-attacher-role-cfg Nov 13 05:30:19.671: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-provisioner Nov 13 05:30:19.673: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3244 Nov 13 05:30:19.673: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3244 Nov 13 05:30:19.676: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3244 Nov 13 05:30:19.681: INFO: creating *v1.Role: csi-mock-volumes-3244-4587/external-provisioner-cfg-csi-mock-volumes-3244 Nov 13 05:30:19.684: INFO: creating *v1.RoleBinding: csi-mock-volumes-3244-4587/csi-provisioner-role-cfg Nov 13 05:30:19.687: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-resizer Nov 13 05:30:19.689: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3244 Nov 13 05:30:19.689: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3244 Nov 13 
05:30:19.692: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3244 Nov 13 05:30:19.695: INFO: creating *v1.Role: csi-mock-volumes-3244-4587/external-resizer-cfg-csi-mock-volumes-3244 Nov 13 05:30:19.698: INFO: creating *v1.RoleBinding: csi-mock-volumes-3244-4587/csi-resizer-role-cfg Nov 13 05:30:19.700: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-snapshotter Nov 13 05:30:19.702: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3244 Nov 13 05:30:19.702: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3244 Nov 13 05:30:19.705: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3244 Nov 13 05:30:19.707: INFO: creating *v1.Role: csi-mock-volumes-3244-4587/external-snapshotter-leaderelection-csi-mock-volumes-3244 Nov 13 05:30:19.710: INFO: creating *v1.RoleBinding: csi-mock-volumes-3244-4587/external-snapshotter-leaderelection Nov 13 05:30:19.713: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-mock Nov 13 05:30:19.716: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3244 Nov 13 05:30:19.718: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3244 Nov 13 05:30:19.722: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3244 Nov 13 05:30:19.724: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3244 Nov 13 05:30:19.727: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3244 Nov 13 05:30:19.729: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3244 Nov 13 05:30:19.732: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3244 Nov 13 05:30:19.735: INFO: creating *v1.StatefulSet: csi-mock-volumes-3244-4587/csi-mockplugin Nov 13 05:30:19.739: INFO: creating *v1.CSIDriver: 
csi-mock-csi-mock-volumes-3244 Nov 13 05:30:19.741: INFO: creating *v1.StatefulSet: csi-mock-volumes-3244-4587/csi-mockplugin-attacher Nov 13 05:30:19.744: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3244" Nov 13 05:30:19.746: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3244 to register on node node2 STEP: Creating pod Nov 13 05:30:36.016: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:30:36.020: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-pvn54] to have phase Bound Nov 13 05:30:36.022: INFO: PersistentVolumeClaim pvc-pvn54 found but phase is Pending instead of Bound. Nov 13 05:30:38.029: INFO: PersistentVolumeClaim pvc-pvn54 found and phase=Bound (2.008883729s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-k8h67 Nov 13 05:30:58.057: INFO: Deleting pod "pvc-volume-tester-k8h67" in namespace "csi-mock-volumes-3244" Nov 13 05:30:58.060: INFO: Wait up to 5m0s for pod "pvc-volume-tester-k8h67" to be fully deleted STEP: Deleting claim pvc-pvn54 Nov 13 05:31:04.074: INFO: Waiting up to 2m0s for PersistentVolume pvc-aba61f5d-4bb4-4fe4-b816-9937d07c93ac to get deleted Nov 13 05:31:04.075: INFO: PersistentVolume pvc-aba61f5d-4bb4-4fe4-b816-9937d07c93ac found and phase=Bound (1.610925ms) Nov 13 05:31:06.079: INFO: PersistentVolume pvc-aba61f5d-4bb4-4fe4-b816-9937d07c93ac found and phase=Released (2.005325172s) Nov 13 05:31:08.083: INFO: PersistentVolume pvc-aba61f5d-4bb4-4fe4-b816-9937d07c93ac found and phase=Released (4.008932853s) Nov 13 05:31:10.088: INFO: PersistentVolume pvc-aba61f5d-4bb4-4fe4-b816-9937d07c93ac found and phase=Released (6.013733648s) Nov 13 05:31:12.093: INFO: PersistentVolume pvc-aba61f5d-4bb4-4fe4-b816-9937d07c93ac was removed STEP: Deleting storageclass csi-mock-volumes-3244-scxs244 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3244 STEP: Waiting for namespaces 
[csi-mock-volumes-3244] to vanish STEP: uninstalling csi mock driver Nov 13 05:31:18.107: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-attacher Nov 13 05:31:18.112: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3244 Nov 13 05:31:18.115: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3244 Nov 13 05:31:18.119: INFO: deleting *v1.Role: csi-mock-volumes-3244-4587/external-attacher-cfg-csi-mock-volumes-3244 Nov 13 05:31:18.123: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3244-4587/csi-attacher-role-cfg Nov 13 05:31:18.127: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-provisioner Nov 13 05:31:18.132: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3244 Nov 13 05:31:18.139: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3244 Nov 13 05:31:18.148: INFO: deleting *v1.Role: csi-mock-volumes-3244-4587/external-provisioner-cfg-csi-mock-volumes-3244 Nov 13 05:31:18.156: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3244-4587/csi-provisioner-role-cfg Nov 13 05:31:18.162: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-resizer Nov 13 05:31:18.166: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3244 Nov 13 05:31:18.169: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3244 Nov 13 05:31:18.173: INFO: deleting *v1.Role: csi-mock-volumes-3244-4587/external-resizer-cfg-csi-mock-volumes-3244 Nov 13 05:31:18.176: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3244-4587/csi-resizer-role-cfg Nov 13 05:31:18.179: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-snapshotter Nov 13 05:31:18.183: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3244 Nov 13 05:31:18.187: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3244 Nov 13 05:31:18.191: INFO: deleting *v1.Role: 
csi-mock-volumes-3244-4587/external-snapshotter-leaderelection-csi-mock-volumes-3244 Nov 13 05:31:18.194: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3244-4587/external-snapshotter-leaderelection Nov 13 05:31:18.198: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3244-4587/csi-mock Nov 13 05:31:18.201: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3244 Nov 13 05:31:18.205: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3244 Nov 13 05:31:18.208: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3244 Nov 13 05:31:18.211: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3244 Nov 13 05:31:18.215: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3244 Nov 13 05:31:18.218: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3244 Nov 13 05:31:18.221: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3244 Nov 13 05:31:18.225: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3244-4587/csi-mockplugin Nov 13 05:31:18.230: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3244 Nov 13 05:31:18.233: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3244-4587/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3244-4587 STEP: Waiting for namespaces [csi-mock-volumes-3244-4587] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:46.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:86.655 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":21,"skipped":578,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:13.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-1075 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:30:13.671: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-attacher Nov 13 05:30:13.674: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1075 Nov 13 05:30:13.674: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1075 Nov 13 05:30:13.677: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1075 Nov 13 05:30:13.680: INFO: creating *v1.Role: csi-mock-volumes-1075-1861/external-attacher-cfg-csi-mock-volumes-1075 Nov 13 05:30:13.682: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-1861/csi-attacher-role-cfg Nov 13 05:30:13.685: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-provisioner Nov 
13 05:30:13.687: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1075 Nov 13 05:30:13.687: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1075 Nov 13 05:30:13.689: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1075 Nov 13 05:30:13.692: INFO: creating *v1.Role: csi-mock-volumes-1075-1861/external-provisioner-cfg-csi-mock-volumes-1075 Nov 13 05:30:13.695: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-1861/csi-provisioner-role-cfg Nov 13 05:30:13.698: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-resizer Nov 13 05:30:13.700: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1075 Nov 13 05:30:13.700: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1075 Nov 13 05:30:13.703: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1075 Nov 13 05:30:13.706: INFO: creating *v1.Role: csi-mock-volumes-1075-1861/external-resizer-cfg-csi-mock-volumes-1075 Nov 13 05:30:13.708: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-1861/csi-resizer-role-cfg Nov 13 05:30:13.711: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-snapshotter Nov 13 05:30:13.713: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1075 Nov 13 05:30:13.713: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1075 Nov 13 05:30:13.715: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1075 Nov 13 05:30:13.718: INFO: creating *v1.Role: csi-mock-volumes-1075-1861/external-snapshotter-leaderelection-csi-mock-volumes-1075 Nov 13 05:30:13.720: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-1861/external-snapshotter-leaderelection Nov 13 05:30:13.723: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-mock Nov 13 05:30:13.725: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1075 Nov 13 05:30:13.727: 
INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1075 Nov 13 05:30:13.730: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1075 Nov 13 05:30:13.732: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1075 Nov 13 05:30:13.735: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1075 Nov 13 05:30:13.737: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1075 Nov 13 05:30:13.740: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1075 Nov 13 05:30:13.743: INFO: creating *v1.StatefulSet: csi-mock-volumes-1075-1861/csi-mockplugin Nov 13 05:30:13.747: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1075 Nov 13 05:30:13.750: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1075" Nov 13 05:30:13.753: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1075 to register on node node1 STEP: Creating pod with fsGroup Nov 13 05:30:28.277: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:30:28.282: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jpj6h] to have phase Bound Nov 13 05:30:28.284: INFO: PersistentVolumeClaim pvc-jpj6h found but phase is Pending instead of Bound. 
Nov 13 05:30:30.290: INFO: PersistentVolumeClaim pvc-jpj6h found and phase=Bound (2.007803096s) Nov 13 05:30:34.311: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-1075] Namespace:csi-mock-volumes-1075 PodName:pvc-volume-tester-kbwdp ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:34.311: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:34.400: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-1075/csi-mock-volumes-1075'; sync] Namespace:csi-mock-volumes-1075 PodName:pvc-volume-tester-kbwdp ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:34.400: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:36.307: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-1075/csi-mock-volumes-1075] Namespace:csi-mock-volumes-1075 PodName:pvc-volume-tester-kbwdp ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:36.307: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:30:36.422: INFO: pod csi-mock-volumes-1075/pvc-volume-tester-kbwdp exec for cmd ls -l /mnt/test/csi-mock-volumes-1075/csi-mock-volumes-1075, stdout: -rw-r--r-- 1 root root 13 Nov 13 05:30 /mnt/test/csi-mock-volumes-1075/csi-mock-volumes-1075, stderr: Nov 13 05:30:36.422: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-1075] Namespace:csi-mock-volumes-1075 PodName:pvc-volume-tester-kbwdp ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:30:36.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-kbwdp Nov 13 05:30:36.505: INFO: Deleting pod "pvc-volume-tester-kbwdp" in namespace "csi-mock-volumes-1075" Nov 13 05:30:36.511: INFO: Wait up to 5m0s for pod 
"pvc-volume-tester-kbwdp" to be fully deleted STEP: Deleting claim pvc-jpj6h Nov 13 05:31:12.525: INFO: Waiting up to 2m0s for PersistentVolume pvc-e4e668d6-a04e-4d5f-8a60-d388e52ea7d7 to get deleted Nov 13 05:31:12.527: INFO: PersistentVolume pvc-e4e668d6-a04e-4d5f-8a60-d388e52ea7d7 found and phase=Bound (2.318442ms) Nov 13 05:31:14.531: INFO: PersistentVolume pvc-e4e668d6-a04e-4d5f-8a60-d388e52ea7d7 was removed STEP: Deleting storageclass csi-mock-volumes-1075-scqt8bg STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1075 STEP: Waiting for namespaces [csi-mock-volumes-1075] to vanish STEP: uninstalling csi mock driver Nov 13 05:31:20.543: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-attacher Nov 13 05:31:20.549: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1075 Nov 13 05:31:20.553: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1075 Nov 13 05:31:20.556: INFO: deleting *v1.Role: csi-mock-volumes-1075-1861/external-attacher-cfg-csi-mock-volumes-1075 Nov 13 05:31:20.560: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-1861/csi-attacher-role-cfg Nov 13 05:31:20.563: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-provisioner Nov 13 05:31:20.567: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1075 Nov 13 05:31:20.570: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1075 Nov 13 05:31:20.573: INFO: deleting *v1.Role: csi-mock-volumes-1075-1861/external-provisioner-cfg-csi-mock-volumes-1075 Nov 13 05:31:20.576: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-1861/csi-provisioner-role-cfg Nov 13 05:31:20.579: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-resizer Nov 13 05:31:20.582: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1075 Nov 13 05:31:20.586: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1075 Nov 
13 05:31:20.589: INFO: deleting *v1.Role: csi-mock-volumes-1075-1861/external-resizer-cfg-csi-mock-volumes-1075 Nov 13 05:31:20.592: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-1861/csi-resizer-role-cfg Nov 13 05:31:20.595: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-snapshotter Nov 13 05:31:20.599: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1075 Nov 13 05:31:20.602: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1075 Nov 13 05:31:20.606: INFO: deleting *v1.Role: csi-mock-volumes-1075-1861/external-snapshotter-leaderelection-csi-mock-volumes-1075 Nov 13 05:31:20.610: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-1861/external-snapshotter-leaderelection Nov 13 05:31:20.613: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-1861/csi-mock Nov 13 05:31:20.617: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1075 Nov 13 05:31:20.620: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1075 Nov 13 05:31:20.623: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1075 Nov 13 05:31:20.629: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1075 Nov 13 05:31:20.632: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1075 Nov 13 05:31:20.637: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1075 Nov 13 05:31:20.640: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1075 Nov 13 05:31:20.644: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1075-1861/csi-mockplugin Nov 13 05:31:20.647: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1075 STEP: deleting the driver namespace: csi-mock-volumes-1075-1861 STEP: Waiting for namespaces [csi-mock-volumes-1075-1861] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:48.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:95.064 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":23,"skipped":610,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:48.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 13 05:31:48.730: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:48.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3304" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 13 05:31:48.743: INFO: AfterEach: Cleaning up test resources Nov 13 05:31:48.743: INFO: pvc is nil Nov 13 05:31:48.743: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:46.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:31:46.302: INFO: The status of Pod test-hostpath-type-bx52w is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:31:48.306: INFO: The status of Pod test-hostpath-type-bx52w is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:31:50.307: INFO: The status of Pod test-hostpath-type-bx52w is Running (Ready = true) STEP: running on node node1 STEP: Create a 
block device for further testing Nov 13 05:31:50.309: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-3815 PodName:test-hostpath-type-bx52w ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:50.310: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:52.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-3815" for this suite. • [SLOW TEST:6.183 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev","total":-1,"completed":22,"skipped":581,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:48.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:31:48.789: INFO: The status of Pod test-hostpath-type-jbmfn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:31:50.793: INFO: The status of Pod test-hostpath-type-jbmfn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:31:52.793: INFO: The status of Pod test-hostpath-type-jbmfn is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 13 05:31:52.796: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-5527 PodName:test-hostpath-type-jbmfn ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:52.796: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:31:54.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-5527" for this suite. 
• [SLOW TEST:6.191 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile","total":-1,"completed":24,"skipped":625,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:52.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:31:54.529: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7-backend && mount --bind /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7-backend /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7-backend && ln -s /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7-backend /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7] Namespace:persistent-local-volumes-test-6100 PodName:hostexec-node1-n47jx 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:54.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:31:54.619: INFO: Creating a PV followed by a PVC Nov 13 05:31:54.626: INFO: Waiting for PV local-pvmvl5d to bind to PVC pvc-fmhqh Nov 13 05:31:54.626: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fmhqh] to have phase Bound Nov 13 05:31:54.628: INFO: PersistentVolumeClaim pvc-fmhqh found but phase is Pending instead of Bound. Nov 13 05:31:56.634: INFO: PersistentVolumeClaim pvc-fmhqh found and phase=Bound (2.007173399s) Nov 13 05:31:56.634: INFO: Waiting up to 3m0s for PersistentVolume local-pvmvl5d to have phase Bound Nov 13 05:31:56.636: INFO: PersistentVolume local-pvmvl5d found and phase=Bound (1.969439ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:32:02.666: INFO: pod "pod-dafca69c-9b21-4084-ae8a-039557cb969f" created on Node "node1" STEP: Writing in pod1 Nov 13 05:32:02.666: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6100 PodName:pod-dafca69c-9b21-4084-ae8a-039557cb969f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:02.666: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:02.757: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:32:02.757: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6100 PodName:pod-dafca69c-9b21-4084-ae8a-039557cb969f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:02.757: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:02.835: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:32:02.836: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6100 PodName:pod-dafca69c-9b21-4084-ae8a-039557cb969f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:02.836: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:02.916: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-dafca69c-9b21-4084-ae8a-039557cb969f in namespace persistent-local-volumes-test-6100 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:32:02.920: INFO: Deleting PersistentVolumeClaim "pvc-fmhqh" Nov 13 05:32:02.924: INFO: Deleting PersistentVolume "local-pvmvl5d" STEP: Removing the test directory Nov 13 05:32:02.927: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7 && umount /tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7-backend && rm -r 
/tmp/local-volume-test-1e228e11-c68c-4651-93a1-f2ac441082d7-backend] Namespace:persistent-local-volumes-test-6100 PodName:hostexec-node1-n47jx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:02.927: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:03.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6100" for this suite. • [SLOW TEST:10.562 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":23,"skipped":594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:32:03.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 13 05:32:03.135: INFO: Waiting up to 5m0s for pod "pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b" in namespace "emptydir-5927" to be "Succeeded or Failed" Nov 13 05:32:03.139: INFO: Pod "pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670384ms Nov 13 05:32:05.142: INFO: Pod "pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006669207s Nov 13 05:32:07.146: INFO: Pod "pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010904471s STEP: Saw pod success Nov 13 05:32:07.146: INFO: Pod "pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b" satisfied condition "Succeeded or Failed" Nov 13 05:32:07.149: INFO: Trying to get logs from node node2 pod pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b container test-container: STEP: delete the pod Nov 13 05:32:07.175: INFO: Waiting for pod pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b to disappear Nov 13 05:32:07.177: INFO: Pod pod-73dfb0b3-3ec4-436b-91e0-324b4f5a427b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:07.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5927" for this suite. 
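The EmptyDir test above runs a container that writes a file on a tmpfs-backed emptyDir with mode 0644 and then verifies the ownership and mode bits from the pod logs. The verification half can be sketched locally; the path is illustrative and `stat -c` assumes GNU coreutils (as on the Linux test nodes):

```shell
# Create a file the way the test container would, then verify the
# 0644 mode bits and the content round-trip.
tmpdir=$(mktemp -d)
echo mount-tmpfs > "$tmpdir/on-tmpfs"
chmod 0644 "$tmpdir/on-tmpfs"
stat -c '%a' "$tmpdir/on-tmpfs"   # -> 644
cat "$tmpdir/on-tmpfs"            # -> mount-tmpfs
rm -rf "$tmpdir"
```

In the real test the FSGroup from the pod's securityContext additionally determines the group ownership (`stat -c '%g'`), which is the part `[NodeFeature:FSGroup]` gates on.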
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":24,"skipped":617,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:30.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 13 05:31:34.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4fda78e4-9bcb-4b69-aec0-72982163f80e] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:34.291: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:34.792: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f9199ea9-a1d6-424a-bdac-b8ce22598959] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:34.792: INFO: >>> kubeConfig: /root/.kube/config Nov 13 
05:31:34.943: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-474533a2-97b9-4d3b-a20f-fb4126f90d90] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:34.943: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:35.314: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-72ee7945-67ee-4d5a-a5ec-88737b908920] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:35.314: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:35.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ecdd6424-ee51-45e8-a23f-c08b45688d20] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:35.456: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:35.562: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0f56f3e1-a367-4c13-8f52-d2a791c7b545] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:35.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:31:35.769: INFO: Creating a PV followed by a PVC Nov 13 05:31:35.775: INFO: Creating a PV followed by a PVC Nov 13 05:31:35.781: INFO: Creating a PV followed by a PVC Nov 13 05:31:35.786: INFO: Creating a PV followed by a PVC Nov 13 05:31:35.791: INFO: Creating a PV followed by a 
PVC Nov 13 05:31:35.797: INFO: Creating a PV followed by a PVC Nov 13 05:31:45.847: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 13 05:31:47.866: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-04108325-224f-4e93-8cba-f5e881b3212f] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:47.867: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:47.952: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a967272f-42fd-45df-8e11-8151a9814233] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:47.952: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:48.028: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4f491fed-bc94-4cda-ac30-8f9317d09911] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:48.028: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:48.106: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d18edc38-26df-44cd-962f-0751fedff20b] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:48.107: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:48.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir 
-p /tmp/local-volume-test-b6703e01-2e07-4422-8578-3fe22a917447] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:48.199: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:48.288: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f597f279-2cda-45f7-a545-5196a4a40433] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:31:48.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:31:48.390: INFO: Creating a PV followed by a PVC Nov 13 05:31:48.397: INFO: Creating a PV followed by a PVC Nov 13 05:31:48.402: INFO: Creating a PV followed by a PVC Nov 13 05:31:48.408: INFO: Creating a PV followed by a PVC Nov 13 05:31:48.414: INFO: Creating a PV followed by a PVC Nov 13 05:31:48.419: INFO: Creating a PV followed by a PVC Nov 13 05:31:58.468: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 STEP: Creating a StatefulSet with pod affinity on nodes Nov 13 05:31:58.476: INFO: Found 0 stateful pods, waiting for 3 Nov 13 05:32:08.481: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:32:08.481: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:32:08.481: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 13 05:32:18.481: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, 
currently Running - Ready=true Nov 13 05:32:18.481: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:32:18.481: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:32:18.484: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Nov 13 05:32:18.487: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.420083ms) Nov 13 05:32:18.487: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-0] to have phase Bound Nov 13 05:32:18.489: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (2.018634ms) Nov 13 05:32:18.489: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Nov 13 05:32:18.491: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.687098ms) Nov 13 05:32:18.492: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Nov 13 05:32:18.495: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found and phase=Bound (3.262759ms) Nov 13 05:32:18.495: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Nov 13 05:32:18.498: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.700299ms) Nov 13 05:32:18.498: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Nov 13 05:32:18.499: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound (1.837432ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 13 
05:32:18.500: INFO: Deleting PersistentVolumeClaim "pvc-rhprx" Nov 13 05:32:18.504: INFO: Deleting PersistentVolume "local-pvj5jpx" STEP: Cleaning up PVC and PV Nov 13 05:32:18.508: INFO: Deleting PersistentVolumeClaim "pvc-kqssv" Nov 13 05:32:18.512: INFO: Deleting PersistentVolume "local-pvf49dt" STEP: Cleaning up PVC and PV Nov 13 05:32:18.515: INFO: Deleting PersistentVolumeClaim "pvc-k7mwh" Nov 13 05:32:18.518: INFO: Deleting PersistentVolume "local-pvjcqnl" STEP: Cleaning up PVC and PV Nov 13 05:32:18.521: INFO: Deleting PersistentVolumeClaim "pvc-7pfbj" Nov 13 05:32:18.525: INFO: Deleting PersistentVolume "local-pv5ch8c" STEP: Cleaning up PVC and PV Nov 13 05:32:18.529: INFO: Deleting PersistentVolumeClaim "pvc-grg2z" Nov 13 05:32:18.532: INFO: Deleting PersistentVolume "local-pvbrfxs" STEP: Cleaning up PVC and PV Nov 13 05:32:18.535: INFO: Deleting PersistentVolumeClaim "pvc-95jfg" Nov 13 05:32:18.539: INFO: Deleting PersistentVolume "local-pv97lnl" STEP: Removing the test directory Nov 13 05:32:18.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4fda78e4-9bcb-4b69-aec0-72982163f80e] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:18.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:18.637: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f9199ea9-a1d6-424a-bdac-b8ce22598959] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:18.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:18.721: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt 
-- sh -c rm -r /tmp/local-volume-test-474533a2-97b9-4d3b-a20f-fb4126f90d90] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:18.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:18.823: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-72ee7945-67ee-4d5a-a5ec-88737b908920] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:18.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:18.909: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ecdd6424-ee51-45e8-a23f-c08b45688d20] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:18.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:18.989: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0f56f3e1-a367-4c13-8f52-d2a791c7b545] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node1-whdpv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:18.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 13 05:32:19.068: INFO: Deleting PersistentVolumeClaim "pvc-h4k7n" Nov 13 05:32:19.072: INFO: Deleting PersistentVolume "local-pv7dj6k" STEP: Cleaning up PVC and PV Nov 13 05:32:19.078: INFO: Deleting PersistentVolumeClaim "pvc-fltxm" Nov 13 05:32:19.082: INFO: Deleting PersistentVolume 
"local-pvz664g" STEP: Cleaning up PVC and PV Nov 13 05:32:19.086: INFO: Deleting PersistentVolumeClaim "pvc-2rkcc" Nov 13 05:32:19.089: INFO: Deleting PersistentVolume "local-pvm2r7z" STEP: Cleaning up PVC and PV Nov 13 05:32:19.093: INFO: Deleting PersistentVolumeClaim "pvc-h8c9l" Nov 13 05:32:19.096: INFO: Deleting PersistentVolume "local-pv7lz4p" STEP: Cleaning up PVC and PV Nov 13 05:32:19.100: INFO: Deleting PersistentVolumeClaim "pvc-tsmtw" Nov 13 05:32:19.104: INFO: Deleting PersistentVolume "local-pvzdhbx" STEP: Cleaning up PVC and PV Nov 13 05:32:19.108: INFO: Deleting PersistentVolumeClaim "pvc-sz7kq" Nov 13 05:32:19.112: INFO: Deleting PersistentVolume "local-pvh6jvl" STEP: Removing the test directory Nov 13 05:32:19.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-04108325-224f-4e93-8cba-f5e881b3212f] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:19.213: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a967272f-42fd-45df-8e11-8151a9814233] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:19.309: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f491fed-bc94-4cda-ac30-8f9317d09911] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.309: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:19.400: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d18edc38-26df-44cd-962f-0751fedff20b] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:19.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b6703e01-2e07-4422-8578-3fe22a917447] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:32:19.580: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f597f279-2cda-45f7-a545-5196a4a40433] Namespace:persistent-local-volumes-test-6709 PodName:hostexec-node2-4cjxd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.580: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:19.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6709" for this suite. 
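Throughout the StatefulSet test above the framework repeatedly polls a condition until a deadline: pods until `Running - Ready=true`, PVCs until `phase=Bound` (every ~1-2s, up to the stated timeout). A minimal sketch of that poll-until-deadline pattern in shell, assuming nothing beyond POSIX sh; the `kubectl` line in the comment is an illustrative equivalent, not a command from this log:

```shell
# Generic poll helper: retry a predicate every <interval> seconds,
# up to <tries> attempts, mirroring the framework's phase polling.
wait_for() {
  tries=$1; interval=$2; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# Illustrative use; a real PVC check would be something like:
#   kubectl get pvc vol1-local-volume-statefulset-0 \
#     -o jsonpath='{.status.phase}' | grep -qx Bound
wait_for 3 1 test -e /tmp/example-flag || echo "condition not met in time"
```

Note the test also asserts the inverse: "PVCs were not bound within 10s (that's good)" -- with delayed binding (`WaitForFirstConsumer`), a PVC staying Pending until a pod consumes it is the expected outcome, so the same polling loop is used to confirm a condition does *not* occur.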
• [SLOW TEST:49.472 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity","total":-1,"completed":8,"skipped":434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:32:07.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84" Nov 13 05:32:09.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84 && dd if=/dev/zero of=/tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84/file 
bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84/file] Namespace:persistent-local-volumes-test-6789 PodName:hostexec-node1-mkrb5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:09.277: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:09.403: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6789 PodName:hostexec-node1-mkrb5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:09.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:32:09.541: INFO: Creating a PV followed by a PVC Nov 13 05:32:09.548: INFO: Waiting for PV local-pvrwdh9 to bind to PVC pvc-2g57f Nov 13 05:32:09.548: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2g57f] to have phase Bound Nov 13 05:32:09.550: INFO: PersistentVolumeClaim pvc-2g57f found but phase is Pending instead of Bound. 
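The `blockfswithoutformat` setup above builds its block device from a plain file: `dd` writes a zeroed backing file (4096-byte blocks x 5120 = 20 MiB), then `losetup -f` attaches it to the first free `/dev/loopN`. Only the file-creation half is runnable without root, so the attach step is left as a comment in this sketch; the temp path is illustrative:

```shell
# Create the 20 MiB zeroed backing file a loop device would be built on.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/file" bs=4096 count=5120 2>/dev/null
stat -c '%s' "$tmpdir/file"       # -> 20971520 (4096 * 5120)
# losetup -f "$tmpdir/file"       # root-only: attach to first free /dev/loopN
rm -rf "$tmpdir"
```

Teardown then mirrors setup in reverse, as the AfterEach below shows: `losetup -d /dev/loop0` detaches the device before the backing directory is removed.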
Nov 13 05:32:11.553: INFO: PersistentVolumeClaim pvc-2g57f found and phase=Bound (2.004994798s) Nov 13 05:32:11.553: INFO: Waiting up to 3m0s for PersistentVolume local-pvrwdh9 to have phase Bound Nov 13 05:32:11.555: INFO: PersistentVolume local-pvrwdh9 found and phase=Bound (1.930154ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:32:15.580: INFO: pod "pod-98bfb108-bc35-4e89-85e5-3f31356ecd0f" created on Node "node1" STEP: Writing in pod1 Nov 13 05:32:15.580: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6789 PodName:pod-98bfb108-bc35-4e89-85e5-3f31356ecd0f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:15.580: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:15.672: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:32:15.672: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6789 PodName:pod-98bfb108-bc35-4e89-85e5-3f31356ecd0f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:15.672: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:15.755: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-98bfb108-bc35-4e89-85e5-3f31356ecd0f in namespace persistent-local-volumes-test-6789 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:32:19.782: INFO: pod "pod-af4989ff-61a6-4c24-a795-12e673261b9d" created on Node "node1" STEP: Reading in pod2 Nov 13 05:32:19.782: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6789 PodName:pod-af4989ff-61a6-4c24-a795-12e673261b9d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:19.782: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:19.871: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-af4989ff-61a6-4c24-a795-12e673261b9d in namespace persistent-local-volumes-test-6789 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:32:19.875: INFO: Deleting PersistentVolumeClaim "pvc-2g57f" Nov 13 05:32:19.879: INFO: Deleting PersistentVolume "local-pvrwdh9" Nov 13 05:32:19.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6789 PodName:hostexec-node1-mkrb5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84/file Nov 13 05:32:19.994: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6789 PodName:hostexec-node1-mkrb5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:19.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory 
/tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84 Nov 13 05:32:20.083: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6a1ec3c1-8abc-480a-a84b-0260107a6e84] Namespace:persistent-local-volumes-test-6789 PodName:hostexec-node1-mkrb5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:20.083: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:20.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6789" for this suite. • [SLOW TEST:12.954 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":25,"skipped":636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 
STEP: Creating a kubernetes client
Nov 13 05:27:25.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548
STEP: Creating the pod
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:32:25.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-407" for this suite.

• [SLOW TEST:300.055 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":10,"skipped":452,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:32:20.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354
STEP: Initializing test volumes
Nov 13 05:32:22.300: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8310fed9-63e7-47f3-bc49-b31e86b136ff] Namespace:persistent-local-volumes-test-3853 PodName:hostexec-node1-dsc4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:32:22.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:32:22.388: INFO: Creating a PV followed by a PVC
Nov 13 05:32:22.395: INFO: Waiting for PV local-pv7qfxr to bind to PVC pvc-hgtp5
Nov 13 05:32:22.396: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hgtp5] to have phase Bound
Nov 13 05:32:22.398: INFO: PersistentVolumeClaim pvc-hgtp5 found but phase is Pending instead of Bound.
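The exec pattern above repeats throughout the suite: host directories are prepared through a hostexec pod that enters the node's mount namespace with `nsenter --mount=/rootfs/proc/1/ns/mnt`, while the write/read verification earlier in the log runs plain shell inside the test pods via ExecWithOptions. A minimal local re-run of that write/read shell sequence, with a hypothetical temp directory standing in for the pod's `/mnt/volume1` mount:

```shell
# Local re-run of the shell the suite execs inside its pods; VOL_DIR is a
# hypothetical stand-in for /mnt/volume1. On the node itself the suite wraps
# such commands in `nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ...`.
VOL_DIR="$(mktemp -d)"
mkdir -p "$VOL_DIR"                               # "Initializing test volumes"
echo "test-file-content" > "$VOL_DIR/test-file"   # what pod1 writes
CONTENT="$(cat "$VOL_DIR/test-file")"             # what pod2 reads back
echo "$CONTENT"
rm -r "$VOL_DIR"                                  # cleanup, as in AfterEach
```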
Nov 13 05:32:24.403: INFO: PersistentVolumeClaim pvc-hgtp5 found and phase=Bound (2.007290579s)
Nov 13 05:32:24.403: INFO: Waiting up to 3m0s for PersistentVolume local-pv7qfxr to have phase Bound
Nov 13 05:32:24.406: INFO: PersistentVolume local-pv7qfxr found and phase=Bound (2.876415ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Nov 13 05:32:24.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1e42535c-07d5-4ef4-bb80-ba34b713527f] Namespace:persistent-local-volumes-test-3853 PodName:hostexec-node1-dsc4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:32:24.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:32:24.519: INFO: Creating a PV followed by a PVC
Nov 13 05:32:24.527: INFO: Waiting for PV local-pv79qsr to bind to PVC pvc-56nwm
Nov 13 05:32:24.527: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-56nwm] to have phase Bound
Nov 13 05:32:24.529: INFO: PersistentVolumeClaim pvc-56nwm found but phase is Pending instead of Bound.
Nov 13 05:32:26.533: INFO: PersistentVolumeClaim pvc-56nwm found and phase=Bound (2.005692073s)
Nov 13 05:32:26.533: INFO: Waiting up to 3m0s for PersistentVolume local-pv79qsr to have phase Bound
Nov 13 05:32:26.536: INFO: PersistentVolume local-pv79qsr found and phase=Bound (3.087644ms)
Nov 13 05:32:26.551: INFO: Waiting up to 5m0s for pod "pod-9732bc61-4aac-44e6-b311-25a2d55dacd2" in namespace "persistent-local-volumes-test-3853" to be "Unschedulable"
Nov 13 05:32:26.554: INFO: Pod "pod-9732bc61-4aac-44e6-b311-25a2d55dacd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393965ms
Nov 13 05:32:28.557: INFO: Pod "pod-9732bc61-4aac-44e6-b311-25a2d55dacd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005953217s
Nov 13 05:32:28.557: INFO: Pod "pod-9732bc61-4aac-44e6-b311-25a2d55dacd2" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370
STEP: Cleaning up PVC and PV
Nov 13 05:32:28.557: INFO: Deleting PersistentVolumeClaim "pvc-hgtp5"
Nov 13 05:32:28.562: INFO: Deleting PersistentVolume "local-pv7qfxr"
STEP: Removing the test directory
Nov 13 05:32:28.566: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8310fed9-63e7-47f3-bc49-b31e86b136ff] Namespace:persistent-local-volumes-test-3853 PodName:hostexec-node1-dsc4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:32:28.566: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:32:28.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3853" for this suite.
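The "Unschedulable" outcome above follows from the local PV's required `nodeAffinity`: the PV is pinned to one node while the pod is constrained to a different one, so the scheduler can never satisfy both and the pod stays Pending. A hedged sketch of the PV shape involved (name, path, and capacity here are hypothetical, not taken from this run):

```shell
# Print a sketch of a local PersistentVolume whose nodeAffinity pins it to
# node1. A pod forced onto any other node cannot use it and stays Pending.
PV_YAML="$(cat <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example        # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-example   # hypothetical host path
  nodeAffinity:                 # required for every local PV
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
EOF
)"
echo "$PV_YAML"
```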
• [SLOW TEST:8.421 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":26,"skipped":671,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:30:55.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-8180 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:30:55.977: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-attacher Nov 13 05:30:55.980: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8180 Nov 13 05:30:55.981: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8180 Nov 13 05:30:55.983: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8180 Nov 13 
05:30:55.987: INFO: creating *v1.Role: csi-mock-volumes-8180-8622/external-attacher-cfg-csi-mock-volumes-8180 Nov 13 05:30:55.989: INFO: creating *v1.RoleBinding: csi-mock-volumes-8180-8622/csi-attacher-role-cfg Nov 13 05:30:55.993: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-provisioner Nov 13 05:30:55.995: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8180 Nov 13 05:30:55.995: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8180 Nov 13 05:30:55.999: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8180 Nov 13 05:30:56.002: INFO: creating *v1.Role: csi-mock-volumes-8180-8622/external-provisioner-cfg-csi-mock-volumes-8180 Nov 13 05:30:56.005: INFO: creating *v1.RoleBinding: csi-mock-volumes-8180-8622/csi-provisioner-role-cfg Nov 13 05:30:56.008: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-resizer Nov 13 05:30:56.010: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8180 Nov 13 05:30:56.010: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8180 Nov 13 05:30:56.012: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8180 Nov 13 05:30:56.015: INFO: creating *v1.Role: csi-mock-volumes-8180-8622/external-resizer-cfg-csi-mock-volumes-8180 Nov 13 05:30:56.017: INFO: creating *v1.RoleBinding: csi-mock-volumes-8180-8622/csi-resizer-role-cfg Nov 13 05:30:56.020: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-snapshotter Nov 13 05:30:56.023: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8180 Nov 13 05:30:56.023: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8180 Nov 13 05:30:56.026: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8180 Nov 13 05:30:56.028: INFO: creating *v1.Role: csi-mock-volumes-8180-8622/external-snapshotter-leaderelection-csi-mock-volumes-8180 Nov 13 05:30:56.031: 
INFO: creating *v1.RoleBinding: csi-mock-volumes-8180-8622/external-snapshotter-leaderelection Nov 13 05:30:56.033: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-mock Nov 13 05:30:56.035: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8180 Nov 13 05:30:56.038: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8180 Nov 13 05:30:56.040: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8180 Nov 13 05:30:56.043: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8180 Nov 13 05:30:56.045: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8180 Nov 13 05:30:56.047: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8180 Nov 13 05:30:56.050: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8180 Nov 13 05:30:56.052: INFO: creating *v1.StatefulSet: csi-mock-volumes-8180-8622/csi-mockplugin Nov 13 05:30:56.057: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8180 Nov 13 05:30:56.060: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8180" Nov 13 05:30:56.062: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8180 to register on node node2 STEP: Creating pod with fsGroup Nov 13 05:31:10.581: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:31:10.586: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5qcjx] to have phase Bound Nov 13 05:31:10.589: INFO: PersistentVolumeClaim pvc-5qcjx found but phase is Pending instead of Bound. 
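With fsGroupPolicy left at its default, kubelet applies the pod's `securityContext.fsGroup` to the volume when it is mounted, which is why the `ls -l` output later in this log shows group 8578 on the file the pod writes. A hedged sketch of the pod shape driving that behavior (names and image are hypothetical; the 8578 GID is the one visible in this log):

```shell
# Print a sketch of a pod whose fsGroup kubelet applies to the CSI volume.
POD_YAML="$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-volume-tester-example   # hypothetical name
spec:
  securityContext:
    fsGroup: 8578                   # GID seen on the test file in this log
  containers:
  - name: volume-tester
    image: busybox:1.29             # hypothetical image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: my-volume
      mountPath: /mnt/test
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pvc-example        # hypothetical claim name
EOF
)"
echo "$POD_YAML"
```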
Nov 13 05:31:12.594: INFO: PersistentVolumeClaim pvc-5qcjx found and phase=Bound (2.007621155s) Nov 13 05:31:16.615: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-8180] Namespace:csi-mock-volumes-8180 PodName:pvc-volume-tester-l2qq6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:16.615: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:16.694: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-8180/csi-mock-volumes-8180'; sync] Namespace:csi-mock-volumes-8180 PodName:pvc-volume-tester-l2qq6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:16.694: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:19.167: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-8180/csi-mock-volumes-8180] Namespace:csi-mock-volumes-8180 PodName:pvc-volume-tester-l2qq6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:19.167: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:19.396: INFO: pod csi-mock-volumes-8180/pvc-volume-tester-l2qq6 exec for cmd ls -l /mnt/test/csi-mock-volumes-8180/csi-mock-volumes-8180, stdout: -rw-r--r-- 1 root 8578 13 Nov 13 05:31 /mnt/test/csi-mock-volumes-8180/csi-mock-volumes-8180, stderr: Nov 13 05:31:19.396: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-8180] Namespace:csi-mock-volumes-8180 PodName:pvc-volume-tester-l2qq6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:31:19.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-l2qq6 Nov 13 05:31:20.002: INFO: Deleting pod "pvc-volume-tester-l2qq6" in namespace "csi-mock-volumes-8180" Nov 13 05:31:20.006: INFO: Wait up to 5m0s for pod 
"pvc-volume-tester-l2qq6" to be fully deleted STEP: Deleting claim pvc-5qcjx Nov 13 05:32:02.019: INFO: Waiting up to 2m0s for PersistentVolume pvc-ad1bff46-8664-4dba-a484-5790ebeaf4b6 to get deleted Nov 13 05:32:02.021: INFO: PersistentVolume pvc-ad1bff46-8664-4dba-a484-5790ebeaf4b6 found and phase=Bound (2.125187ms) Nov 13 05:32:04.023: INFO: PersistentVolume pvc-ad1bff46-8664-4dba-a484-5790ebeaf4b6 was removed STEP: Deleting storageclass csi-mock-volumes-8180-scp75tm STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8180 STEP: Waiting for namespaces [csi-mock-volumes-8180] to vanish STEP: uninstalling csi mock driver Nov 13 05:32:10.034: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-attacher Nov 13 05:32:10.038: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8180 Nov 13 05:32:10.042: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8180 Nov 13 05:32:10.045: INFO: deleting *v1.Role: csi-mock-volumes-8180-8622/external-attacher-cfg-csi-mock-volumes-8180 Nov 13 05:32:10.048: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8180-8622/csi-attacher-role-cfg Nov 13 05:32:10.053: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-provisioner Nov 13 05:32:10.056: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8180 Nov 13 05:32:10.060: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8180 Nov 13 05:32:10.063: INFO: deleting *v1.Role: csi-mock-volumes-8180-8622/external-provisioner-cfg-csi-mock-volumes-8180 Nov 13 05:32:10.066: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8180-8622/csi-provisioner-role-cfg Nov 13 05:32:10.069: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-resizer Nov 13 05:32:10.072: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8180 Nov 13 05:32:10.076: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8180 Nov 
13 05:32:10.079: INFO: deleting *v1.Role: csi-mock-volumes-8180-8622/external-resizer-cfg-csi-mock-volumes-8180 Nov 13 05:32:10.082: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8180-8622/csi-resizer-role-cfg Nov 13 05:32:10.085: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-snapshotter Nov 13 05:32:10.088: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8180 Nov 13 05:32:10.091: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8180 Nov 13 05:32:10.095: INFO: deleting *v1.Role: csi-mock-volumes-8180-8622/external-snapshotter-leaderelection-csi-mock-volumes-8180 Nov 13 05:32:10.098: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8180-8622/external-snapshotter-leaderelection Nov 13 05:32:10.101: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8180-8622/csi-mock Nov 13 05:32:10.105: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8180 Nov 13 05:32:10.109: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8180 Nov 13 05:32:10.112: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8180 Nov 13 05:32:10.115: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8180 Nov 13 05:32:10.118: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8180 Nov 13 05:32:10.121: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8180 Nov 13 05:32:10.124: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8180 Nov 13 05:32:10.127: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8180-8622/csi-mockplugin Nov 13 05:32:10.130: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8180 STEP: deleting the driver namespace: csi-mock-volumes-8180-8622 STEP: Waiting for namespaces [csi-mock-volumes-8180-8622] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:38.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:102.246 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":21,"skipped":975,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:31:23.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-4853 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:31:24.063: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-attacher Nov 13 05:31:24.066: INFO: creating 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4853 Nov 13 05:31:24.066: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4853 Nov 13 05:31:24.069: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4853 Nov 13 05:31:24.071: INFO: creating *v1.Role: csi-mock-volumes-4853-7077/external-attacher-cfg-csi-mock-volumes-4853 Nov 13 05:31:24.073: INFO: creating *v1.RoleBinding: csi-mock-volumes-4853-7077/csi-attacher-role-cfg Nov 13 05:31:24.076: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-provisioner Nov 13 05:31:24.078: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4853 Nov 13 05:31:24.078: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4853 Nov 13 05:31:24.081: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4853 Nov 13 05:31:24.084: INFO: creating *v1.Role: csi-mock-volumes-4853-7077/external-provisioner-cfg-csi-mock-volumes-4853 Nov 13 05:31:24.087: INFO: creating *v1.RoleBinding: csi-mock-volumes-4853-7077/csi-provisioner-role-cfg Nov 13 05:31:24.090: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-resizer Nov 13 05:31:24.093: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4853 Nov 13 05:31:24.093: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4853 Nov 13 05:31:24.095: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4853 Nov 13 05:31:24.098: INFO: creating *v1.Role: csi-mock-volumes-4853-7077/external-resizer-cfg-csi-mock-volumes-4853 Nov 13 05:31:24.100: INFO: creating *v1.RoleBinding: csi-mock-volumes-4853-7077/csi-resizer-role-cfg Nov 13 05:31:24.103: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-snapshotter Nov 13 05:31:24.105: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4853 Nov 13 05:31:24.106: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-4853 Nov 13 05:31:24.108: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4853 Nov 13 05:31:24.111: INFO: creating *v1.Role: csi-mock-volumes-4853-7077/external-snapshotter-leaderelection-csi-mock-volumes-4853 Nov 13 05:31:24.114: INFO: creating *v1.RoleBinding: csi-mock-volumes-4853-7077/external-snapshotter-leaderelection Nov 13 05:31:24.116: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-mock Nov 13 05:31:24.118: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4853 Nov 13 05:31:24.121: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4853 Nov 13 05:31:24.123: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4853 Nov 13 05:31:24.126: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4853 Nov 13 05:31:24.129: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4853 Nov 13 05:31:24.132: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4853 Nov 13 05:31:24.135: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4853 Nov 13 05:31:24.138: INFO: creating *v1.StatefulSet: csi-mock-volumes-4853-7077/csi-mockplugin Nov 13 05:31:24.142: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4853 Nov 13 05:31:24.144: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4853" Nov 13 05:31:24.146: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4853 to register on node node1 I1113 05:31:32.485630 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4853","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:31:32.811566 23 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:31:32.812985 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4853","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:31:32.814970 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:31:32.818789 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:31:33.629374 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4853"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:31:33.661: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:31:33.666: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kr9px] to have phase Bound Nov 13 05:31:33.668: INFO: PersistentVolumeClaim pvc-kr9px found but phase is Pending instead of Bound. 
I1113 05:31:33.675270 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982"}}},"Error":"","FullError":null} Nov 13 05:31:35.672: INFO: PersistentVolumeClaim pvc-kr9px found and phase=Bound (2.005999512s) Nov 13 05:31:35.687: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kr9px] to have phase Bound Nov 13 05:31:35.690: INFO: PersistentVolumeClaim pvc-kr9px found and phase=Bound (2.320704ms) STEP: Waiting for expected CSI calls I1113 05:31:35.851816 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:31:35.854927 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982","storage.kubernetes.io/csiProvisionerIdentity":"1636781492817-8081-csi-mock-csi-mock-volumes-4853"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:31:36.455670 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:31:36.457328 23 csi.go:431] 
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982","storage.kubernetes.io/csiProvisionerIdentity":"1636781492817-8081-csi-mock-csi-mock-volumes-4853"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:31:37.467111 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:31:37.471552 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982","storage.kubernetes.io/csiProvisionerIdentity":"1636781492817-8081-csi-mock-csi-mock-volumes-4853"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:31:39.488595 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:31:39.490: INFO: >>> kubeConfig: /root/.kube/config I1113 05:31:39.575851 23 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982","storage.kubernetes.io/csiProvisionerIdentity":"1636781492817-8081-csi-mock-csi-mock-volumes-4853"}},"Response":{},"Error":"","FullError":null} I1113 05:31:39.584254 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:31:39.586: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:31:39.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Waiting for pod to be running Nov 13 05:31:39.761: INFO: >>> kubeConfig: /root/.kube/config I1113 05:31:39.856249 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/globalmount","target_path":"/var/lib/kubelet/pods/683994b2-4972-4f93-842c-793c98e3e08b/volumes/kubernetes.io~csi/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982","storage.kubernetes.io/csiProvisionerIdentity":"1636781492817-8081-csi-mock-csi-mock-volumes-4853"}},"Response":{},"Error":"","FullError":null} STEP: Deleting the previously created pod Nov 13 05:31:43.699: INFO: Deleting pod "pvc-volume-tester-p6b6r" in namespace "csi-mock-volumes-4853" Nov 13 05:31:43.705: INFO: Wait up to 5m0s for pod "pvc-volume-tester-p6b6r" to be fully deleted Nov 13 05:31:45.609: INFO: >>> kubeConfig: /root/.kube/config I1113 05:31:45.696342 23 
csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/683994b2-4972-4f93-842c-793c98e3e08b/volumes/kubernetes.io~csi/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/mount"},"Response":{},"Error":"","FullError":null} I1113 05:31:45.709943 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:31:45.711887 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-p6b6r Nov 13 05:31:52.713: INFO: Deleting pod "pvc-volume-tester-p6b6r" in namespace "csi-mock-volumes-4853" STEP: Deleting claim pvc-kr9px Nov 13 05:31:52.721: INFO: Waiting up to 2m0s for PersistentVolume pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982 to get deleted Nov 13 05:31:52.723: INFO: PersistentVolume pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982 found and phase=Bound (1.833718ms) I1113 05:31:52.734677 23 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 13 05:31:54.728: INFO: PersistentVolume pvc-93858eb0-4f4b-40ec-a483-0c6d0a8b7982 was removed STEP: Deleting storageclass csi-mock-volumes-4853-scxp9fv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4853 STEP: Waiting for namespaces [csi-mock-volumes-4853] to vanish STEP: uninstalling csi mock driver Nov 13 05:32:00.751: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-attacher Nov 13 05:32:00.755: INFO: deleting *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-4853 Nov 13 05:32:00.758: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4853 Nov 13 05:32:00.762: INFO: deleting *v1.Role: csi-mock-volumes-4853-7077/external-attacher-cfg-csi-mock-volumes-4853 Nov 13 05:32:00.766: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4853-7077/csi-attacher-role-cfg Nov 13 05:32:00.770: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-provisioner Nov 13 05:32:00.773: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4853 Nov 13 05:32:00.777: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4853 Nov 13 05:32:00.781: INFO: deleting *v1.Role: csi-mock-volumes-4853-7077/external-provisioner-cfg-csi-mock-volumes-4853 Nov 13 05:32:00.784: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4853-7077/csi-provisioner-role-cfg Nov 13 05:32:00.789: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-resizer Nov 13 05:32:00.793: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4853 Nov 13 05:32:00.797: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4853 Nov 13 05:32:00.800: INFO: deleting *v1.Role: csi-mock-volumes-4853-7077/external-resizer-cfg-csi-mock-volumes-4853 Nov 13 05:32:00.805: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4853-7077/csi-resizer-role-cfg Nov 13 05:32:00.808: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-snapshotter Nov 13 05:32:00.811: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4853 Nov 13 05:32:00.818: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4853 Nov 13 05:32:00.822: INFO: deleting *v1.Role: csi-mock-volumes-4853-7077/external-snapshotter-leaderelection-csi-mock-volumes-4853 Nov 13 05:32:00.833: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4853-7077/external-snapshotter-leaderelection Nov 13 05:32:00.836: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-4853-7077/csi-mock Nov 13 05:32:00.840: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4853 Nov 13 05:32:00.844: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4853 Nov 13 05:32:00.847: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4853 Nov 13 05:32:00.851: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4853 Nov 13 05:32:00.854: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4853 Nov 13 05:32:00.857: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4853 Nov 13 05:32:00.861: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4853 Nov 13 05:32:00.864: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4853-7077/csi-mockplugin Nov 13 05:32:00.869: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4853 STEP: deleting the driver namespace: csi-mock-volumes-4853-7077 STEP: Waiting for namespaces [csi-mock-volumes-4853-7077] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:44.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.894 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should 
retry NodeStage after NodeStage ephemeral error","total":-1,"completed":22,"skipped":626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:32:25.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd" Nov 13 05:32:29.625: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd && dd if=/dev/zero of=/tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd/file] Namespace:persistent-local-volumes-test-6097 PodName:hostexec-node2-9sdnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:29.625: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:29.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] 
Namespace:persistent-local-volumes-test-6097 PodName:hostexec-node2-9sdnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:29.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:32:29.848: INFO: Creating a PV followed by a PVC Nov 13 05:32:29.855: INFO: Waiting for PV local-pv685hk to bind to PVC pvc-4r6mn Nov 13 05:32:29.855: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4r6mn] to have phase Bound Nov 13 05:32:29.858: INFO: PersistentVolumeClaim pvc-4r6mn found but phase is Pending instead of Bound. Nov 13 05:32:31.862: INFO: PersistentVolumeClaim pvc-4r6mn found but phase is Pending instead of Bound. Nov 13 05:32:33.864: INFO: PersistentVolumeClaim pvc-4r6mn found but phase is Pending instead of Bound. Nov 13 05:32:35.869: INFO: PersistentVolumeClaim pvc-4r6mn found but phase is Pending instead of Bound. Nov 13 05:32:37.874: INFO: PersistentVolumeClaim pvc-4r6mn found but phase is Pending instead of Bound. Nov 13 05:32:39.877: INFO: PersistentVolumeClaim pvc-4r6mn found but phase is Pending instead of Bound. 
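[Editor's note] The hostexec commands above build a loop-backed block device for the "blockfswithoutformat" volume type: dd creates a 20 MiB backing file (bs=4096 × count=5120), `losetup -f` attaches it, and the device name is recovered by filtering `losetup` output through grep/awk. A minimal sketch of the sizing and the name-extraction step, run here against a canned sample line rather than a live loop device (attaching one requires root):

```shell
#!/bin/sh
# Size of the backing file the test creates with dd: bs=4096, count=5120.
size=$((4096 * 5120))          # 20971520 bytes = 20 MiB
echo "$size"

# Hypothetical one-line sample of `losetup` output; a real run would pipe
# `losetup` itself, as the test's hostexec command does.
sample='/dev/loop0 0 0 0 0 /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd/file 0 512'
dev=$(printf '%s\n' "$sample" | grep local-volume-test | awk '{ print $1 }')
echo "$dev"
```

The test later detaches the device with `losetup -d /dev/loop0` and removes the backing directory, as the teardown steps further down show.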
Nov 13 05:32:41.882: INFO: PersistentVolumeClaim pvc-4r6mn found and phase=Bound (12.027102395s) Nov 13 05:32:41.882: INFO: Waiting up to 3m0s for PersistentVolume local-pv685hk to have phase Bound Nov 13 05:32:41.885: INFO: PersistentVolume local-pv685hk found and phase=Bound (2.971319ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:32:45.914: INFO: pod "pod-3acc13df-2922-4df4-a89e-85fe56bda631" created on Node "node2" STEP: Writing in pod1 Nov 13 05:32:45.914: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6097 PodName:pod-3acc13df-2922-4df4-a89e-85fe56bda631 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:45.914: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:46.012: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:32:46.012: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6097 PodName:pod-3acc13df-2922-4df4-a89e-85fe56bda631 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:46.012: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:46.097: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:32:46.097: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-6097 PodName:pod-3acc13df-2922-4df4-a89e-85fe56bda631 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:32:46.097: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:46.190: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3acc13df-2922-4df4-a89e-85fe56bda631 in namespace persistent-local-volumes-test-6097 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:32:46.195: INFO: Deleting PersistentVolumeClaim "pvc-4r6mn" Nov 13 05:32:46.199: INFO: Deleting PersistentVolume "local-pv685hk" Nov 13 05:32:46.202: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6097 PodName:hostexec-node2-9sdnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:46.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd/file Nov 13 05:32:46.299: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6097 PodName:hostexec-node2-9sdnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 
13 05:32:46.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd Nov 13 05:32:46.392: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-403290e1-0d0d-426b-9ec4-9eeb8f0a71dd] Namespace:persistent-local-volumes-test-6097 PodName:hostexec-node2-9sdnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:46.392: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:46.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6097" for this suite. • [SLOW TEST:20.932 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:27:46.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:46.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8705" for this suite. • [SLOW TEST:300.060 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":12,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:32:46.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:32:46.970: INFO: The status of Pod test-hostpath-type-2vcf4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:32:48.975: INFO: The status of Pod test-hostpath-type-2vcf4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:32:50.974: INFO: The status of Pod test-hostpath-type-2vcf4 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:32:52.978: INFO: The status of Pod test-hostpath-type-2vcf4 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:32:59.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-8011" for this suite. 
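[Editor's note] The HostPathType test above first auto-creates the regular file 'afile' via HostPathFileOrCreate, then expects a mount with HostPathType CharDevice to fail, since the path holds a regular file rather than a character device. As a sketch, a pod's hostPath volume pinning the type could look like this (the volume name and path below are hypothetical, not taken from the test):

```yaml
# Hypothetical pod spec fragment: hostPath with an explicit type.
# CharDevice requires a character device to exist at the path; a regular
# file there makes the kubelet reject the mount, which the test detects
# via the HostPathType error event.
volumes:
- name: test-volume
  hostPath:
    path: /tmp/afile
    type: CharDevice   # FileOrCreate would instead auto-create a regular file
```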
• [SLOW TEST:12.098 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev","total":-1,"completed":13,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:32:38.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8" Nov 13 05:32:42.233: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8" "/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8"] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-mk2kz 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:42.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:32:42.425: INFO: Creating a PV followed by a PVC Nov 13 05:32:42.432: INFO: Waiting for PV local-pvfq9r8 to bind to PVC pvc-p99rt Nov 13 05:32:42.433: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-p99rt] to have phase Bound Nov 13 05:32:42.435: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:44.438: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:46.440: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:48.446: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:50.452: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:52.458: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:54.461: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. Nov 13 05:32:56.466: INFO: PersistentVolumeClaim pvc-p99rt found but phase is Pending instead of Bound. 
Nov 13 05:32:58.471: INFO: PersistentVolumeClaim pvc-p99rt found and phase=Bound (16.038726171s) Nov 13 05:32:58.471: INFO: Waiting up to 3m0s for PersistentVolume local-pvfq9r8 to have phase Bound Nov 13 05:32:58.473: INFO: PersistentVolume local-pvfq9r8 found and phase=Bound (2.011275ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:33:02.498: INFO: pod "pod-f20d563b-44e1-44a5-9a2a-5d8bdb763a3e" created on Node "node2" STEP: Writing in pod1 Nov 13 05:33:02.498: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5258 PodName:pod-f20d563b-44e1-44a5-9a2a-5d8bdb763a3e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:33:02.498: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:33:02.581: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:33:02.581: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5258 PodName:pod-f20d563b-44e1-44a5-9a2a-5d8bdb763a3e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:33:02.581: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:33:02.656: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-f20d563b-44e1-44a5-9a2a-5d8bdb763a3e in namespace persistent-local-volumes-test-5258 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:33:02.661: INFO: Deleting PersistentVolumeClaim "pvc-p99rt" Nov 13 05:33:02.665: INFO: Deleting PersistentVolume "local-pvfq9r8" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8" Nov 13 05:33:02.669: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8"] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-mk2kz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:33:02.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:33:02.764: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8] Namespace:persistent-local-volumes-test-5258 PodName:hostexec-node2-mk2kz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:33:02.764: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:33:02.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5258" for this suite. 
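[Editor's note] The tmpfs variant above mounts a 10 MB tmpfs at the test path (`mount -t tmpfs -o size=10m …`) and unmounts it during cleanup. Mounting requires root, so as a sketch, here is one way such a mount could be verified by parsing a /proc/mounts-style record; the sample line below is hypothetical, modeled on the mount command in the log:

```shell
#!/bin/sh
# Hypothetical /proc/mounts record for the mount the test performs with:
#   mount -t tmpfs -o size=10m tmpfs-<path> <path>
sample='tmpfs-/tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8 /tmp/local-volume-test-dcd1b97a-5646-430e-8f6f-53aa209675b8 tmpfs rw,relatime,size=10240k 0 0'

# In /proc/mounts, field 2 is the mount point and field 3 the fs type.
printf '%s\n' "$sample" | awk '$3 == "tmpfs" { print $2 }'
```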
• [SLOW TEST:24.682 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":22,"skipped":987,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:32:46.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path 
"/tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d" Nov 13 05:32:50.660: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d && dd if=/dev/zero of=/tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d/file] Namespace:persistent-local-volumes-test-1053 PodName:hostexec-node2-nbvgh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:50.660: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:32:50.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1053 PodName:hostexec-node2-nbvgh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:32:50.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:32:50.866: INFO: Creating a PV followed by a PVC Nov 13 05:32:50.872: INFO: Waiting for PV local-pv5jfpv to bind to PVC pvc-5td7z Nov 13 05:32:50.872: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5td7z] to have phase Bound Nov 13 05:32:50.875: INFO: PersistentVolumeClaim pvc-5td7z found but phase is Pending instead of Bound. Nov 13 05:32:52.879: INFO: PersistentVolumeClaim pvc-5td7z found but phase is Pending instead of Bound. Nov 13 05:32:54.883: INFO: PersistentVolumeClaim pvc-5td7z found but phase is Pending instead of Bound. 
Nov 13 05:32:56.886: INFO: PersistentVolumeClaim pvc-5td7z found and phase=Bound (6.01416589s) Nov 13 05:32:56.886: INFO: Waiting up to 3m0s for PersistentVolume local-pv5jfpv to have phase Bound Nov 13 05:32:56.889: INFO: PersistentVolume local-pv5jfpv found and phase=Bound (2.421334ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:33:02.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1053 exec pod-67cc5c27-c652-478b-a8d9-0305ef1f369e --namespace=persistent-local-volumes-test-1053 -- stat -c %g /mnt/volume1' Nov 13 05:33:03.125: INFO: stderr: "" Nov 13 05:33:03.125: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-67cc5c27-c652-478b-a8d9-0305ef1f369e in namespace persistent-local-volumes-test-1053 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:33:03.129: INFO: Deleting PersistentVolumeClaim "pvc-5td7z" Nov 13 05:33:03.134: INFO: Deleting PersistentVolume "local-pv5jfpv" Nov 13 05:33:03.138: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1053 PodName:hostexec-node2-nbvgh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:33:03.138: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d/file Nov 13 05:33:03.223: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1053 PodName:hostexec-node2-nbvgh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:33:03.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d Nov 13 05:33:03.313: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-35d27f24-3dc6-4cd7-bee9-e5305c203c2d] Namespace:persistent-local-volumes-test-1053 PodName:hostexec-node2-nbvgh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:33:03.313: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:33:03.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1053" for this suite. 
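[Editor's note] The fsGroup check above runs `stat -c %g /mnt/volume1` inside the pod and reads back "1234", the fsGroup the kubelet applied to the volume root. The same probe can be sketched locally: a freshly created file reports the creating process's group id, which is what the fsGroup mechanism arranges for the mounted volume. Assumes GNU/busybox `stat` (the `-c` format flag):

```shell
#!/bin/sh
# Create a scratch file and read its numeric group id, mirroring the
# test's in-pod check `stat -c %g /mnt/volume1` (which expected 1234).
tmp=$(mktemp -d)
touch "$tmp/probe"
gid=$(stat -c %g "$tmp/probe")
echo "$gid"        # the current process's group id, i.e. `id -g`
rm -r "$tmp"
```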
• [SLOW TEST:16.844 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":12,"skipped":502,"failed":0}
SSSSSSSSSSSSS
------------------------------
Nov 13 05:33:03.483: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:32:28.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49
[It] should allow deletion of pod with invalid volume : configmap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
Nov 13 05:32:58.812: INFO: Deleting pod "pv-1695"/"pod-ephm-test-projected-w6zw"
Nov 13 05:32:58.812: INFO: Deleting pod "pod-ephm-test-projected-w6zw" in namespace "pv-1695"
Nov 13 05:32:58.818: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-w6zw" to be fully deleted
[AfterEach] [sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:33:12.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-1695" for this suite.
• [SLOW TEST:44.058 seconds]
[sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
should allow deletion of pod with invalid volume : configmap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":27,"skipped":719,"failed":0}
Nov 13 05:33:12.835: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:32:59.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6"
Nov 13 05:33:01.211: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6 && dd if=/dev/zero of=/tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6/file] Namespace:persistent-local-volumes-test-3682 PodName:hostexec-node2-ssw6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:33:01.211: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:33:01.383: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3682 PodName:hostexec-node2-ssw6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:33:01.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:33:01.479: INFO: Creating a PV followed by a PVC
Nov 13 05:33:01.486: INFO: Waiting for PV local-pvj9bb2 to bind to PVC pvc-brnqg
Nov 13 05:33:01.486: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-brnqg] to have phase Bound
Nov 13 05:33:01.487: INFO: PersistentVolumeClaim pvc-brnqg found but phase is Pending instead of Bound.
Nov 13 05:33:03.493: INFO: PersistentVolumeClaim pvc-brnqg found but phase is Pending instead of Bound.
Nov 13 05:33:05.496: INFO: PersistentVolumeClaim pvc-brnqg found but phase is Pending instead of Bound.
Nov 13 05:33:07.503: INFO: PersistentVolumeClaim pvc-brnqg found but phase is Pending instead of Bound.
Nov 13 05:33:09.507: INFO: PersistentVolumeClaim pvc-brnqg found but phase is Pending instead of Bound.
Nov 13 05:33:11.510: INFO: PersistentVolumeClaim pvc-brnqg found and phase=Bound (10.024656298s)
Nov 13 05:33:11.510: INFO: Waiting up to 3m0s for PersistentVolume local-pvj9bb2 to have phase Bound
Nov 13 05:33:11.512: INFO: PersistentVolume local-pvj9bb2 found and phase=Bound (1.881287ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Nov 13 05:33:15.540: INFO: pod "pod-f3744a70-e6d2-45ac-a2e8-7e247ea03167" created on Node "node2"
STEP: Writing in pod1
Nov 13 05:33:15.540: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3682 PodName:pod-f3744a70-e6d2-45ac-a2e8-7e247ea03167 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:33:15.540: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:33:15.623: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Nov 13 05:33:15.623: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3682 PodName:pod-f3744a70-e6d2-45ac-a2e8-7e247ea03167 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:33:15.623: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:33:15.732: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Nov 13 05:33:19.754: INFO: pod "pod-9f99f643-e187-4048-b481-c3823810f389" created on Node "node2"
Nov 13 05:33:19.754: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3682 PodName:pod-9f99f643-e187-4048-b481-c3823810f389 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:33:19.754: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:33:19.831: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Nov 13 05:33:19.831: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop1 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3682 PodName:pod-9f99f643-e187-4048-b481-c3823810f389 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:33:19.831: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:33:19.903: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop1 > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Nov 13 05:33:19.903: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3682 PodName:pod-f3744a70-e6d2-45ac-a2e8-7e247ea03167 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:33:19.903: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:33:19.978: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop1", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-f3744a70-e6d2-45ac-a2e8-7e247ea03167 in namespace persistent-local-volumes-test-3682
STEP: Deleting pod2
STEP: Deleting pod pod-9f99f643-e187-4048-b481-c3823810f389 in namespace persistent-local-volumes-test-3682
[AfterEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 13 05:33:19.988: INFO: Deleting PersistentVolumeClaim "pvc-brnqg"
Nov 13 05:33:19.992: INFO: Deleting PersistentVolume "local-pvj9bb2"
Nov 13 05:33:19.996: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3682 PodName:hostexec-node2-ssw6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:33:19.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6/file
Nov 13 05:33:20.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-3682 PodName:hostexec-node2-ssw6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:33:20.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6
Nov 13 05:33:20.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8b37894f-069e-4f6d-8129-103fbeddcef6] Namespace:persistent-local-volumes-test-3682 PodName:hostexec-node2-ssw6r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:33:20.199: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:33:20.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3682" for this suite.
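The two-pod exchange in this test (pod1 writes `test-file-content`, pod2 reads it back, then pod2 overwrites it and pod1 sees the new value) works because both pods mount the same local volume. A local stand-in for that round trip, assuming no pods at all: a single shared directory plays the role of `/mnt/volume1`, and the commands are the same ones the framework execs in the `write-pod` container.

```shell
#!/bin/sh
# Stand-in for the pod1-writes / pod2-reads exchange: both "pods" see the
# same backing storage, modeled here as one shared temp directory.
set -eu

volume=$(mktemp -d)                  # plays the role of the shared /mnt/volume1

# "pod1" writes (same command the e2e test runs in write-pod)
mkdir -p "$volume"; echo test-file-content > "$volume/test-file"

# "pod2" reads back what pod1 wrote
readback=$(cat "$volume/test-file")
echo "$readback"
rm -r "$volume"
```

On a real cluster the shared state lives on the node's loop-backed local PV rather than in a temp directory, but the read-your-writes property being tested is the same.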
• [SLOW TEST:21.142 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":512,"failed":0}
Nov 13 05:33:20.303: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:32:44.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] token should not be plumbed down when CSIDriver is not deployed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
STEP: Building a driver namespace object, basename csi-mock-volumes-5352
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:32:45.062: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-attacher
Nov 13 05:32:45.066: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5352
Nov 13 05:32:45.066: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5352
Nov 13 05:32:45.068: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5352
Nov 13 05:32:45.074: INFO: creating *v1.Role: csi-mock-volumes-5352-6794/external-attacher-cfg-csi-mock-volumes-5352
Nov 13 05:32:45.077: INFO: creating *v1.RoleBinding: csi-mock-volumes-5352-6794/csi-attacher-role-cfg
Nov 13 05:32:45.080: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-provisioner
Nov 13 05:32:45.083: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5352
Nov 13 05:32:45.083: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5352
Nov 13 05:32:45.085: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5352
Nov 13 05:32:45.088: INFO: creating *v1.Role: csi-mock-volumes-5352-6794/external-provisioner-cfg-csi-mock-volumes-5352
Nov 13 05:32:45.091: INFO: creating *v1.RoleBinding: csi-mock-volumes-5352-6794/csi-provisioner-role-cfg
Nov 13 05:32:45.094: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-resizer
Nov 13 05:32:45.096: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5352
Nov 13 05:32:45.096: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5352
Nov 13 05:32:45.099: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5352
Nov 13 05:32:45.101: INFO: creating *v1.Role: csi-mock-volumes-5352-6794/external-resizer-cfg-csi-mock-volumes-5352
Nov 13 05:32:45.104: INFO: creating *v1.RoleBinding: csi-mock-volumes-5352-6794/csi-resizer-role-cfg
Nov 13 05:32:45.107: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-snapshotter
Nov 13 05:32:45.109: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5352
Nov 13 05:32:45.109: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5352
Nov 13 05:32:45.112: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5352
Nov 13 05:32:45.116: INFO: creating *v1.Role: csi-mock-volumes-5352-6794/external-snapshotter-leaderelection-csi-mock-volumes-5352
Nov 13 05:32:45.118: INFO: creating *v1.RoleBinding: csi-mock-volumes-5352-6794/external-snapshotter-leaderelection
Nov 13 05:32:45.121: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-mock
Nov 13 05:32:45.123: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5352
Nov 13 05:32:45.125: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5352
Nov 13 05:32:45.128: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5352
Nov 13 05:32:45.131: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5352
Nov 13 05:32:45.133: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5352
Nov 13 05:32:45.136: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5352
Nov 13 05:32:45.138: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5352
Nov 13 05:32:45.141: INFO: creating *v1.StatefulSet: csi-mock-volumes-5352-6794/csi-mockplugin
Nov 13 05:32:45.145: INFO: creating *v1.StatefulSet: csi-mock-volumes-5352-6794/csi-mockplugin-attacher
Nov 13 05:32:45.148: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5352 to register on node node1
STEP: Creating pod
Nov 13 05:32:54.668: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:32:54.674: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-pkhwr] to have phase Bound
Nov 13 05:32:54.676: INFO: PersistentVolumeClaim pvc-pkhwr found but phase is Pending instead of Bound.
Nov 13 05:32:56.680: INFO: PersistentVolumeClaim pvc-pkhwr found and phase=Bound (2.005385371s)
STEP: Deleting the previously created pod
Nov 13 05:33:04.701: INFO: Deleting pod "pvc-volume-tester-c7sw7" in namespace "csi-mock-volumes-5352"
Nov 13 05:33:04.705: INFO: Wait up to 5m0s for pod "pvc-volume-tester-c7sw7" to be fully deleted
STEP: Checking CSI driver logs
Nov 13 05:33:12.728: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/28d5224b-fc6c-4956-8f15-8a520748d66a/volumes/kubernetes.io~csi/pvc-f10360d2-0100-458a-b872-2f287aa98ab4/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-c7sw7
Nov 13 05:33:12.728: INFO: Deleting pod "pvc-volume-tester-c7sw7" in namespace "csi-mock-volumes-5352"
STEP: Deleting claim pvc-pkhwr
Nov 13 05:33:12.737: INFO: Waiting up to 2m0s for PersistentVolume pvc-f10360d2-0100-458a-b872-2f287aa98ab4 to get deleted
Nov 13 05:33:12.739: INFO: PersistentVolume pvc-f10360d2-0100-458a-b872-2f287aa98ab4 found and phase=Bound (2.519951ms)
Nov 13 05:33:14.743: INFO: PersistentVolume pvc-f10360d2-0100-458a-b872-2f287aa98ab4 was removed
STEP: Deleting storageclass csi-mock-volumes-5352-scjkbn8
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-5352
STEP: Waiting for namespaces [csi-mock-volumes-5352] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:33:20.754: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-attacher
Nov 13 05:33:20.758: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5352
Nov 13 05:33:20.762: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5352
Nov 13 05:33:20.766: INFO: deleting *v1.Role: csi-mock-volumes-5352-6794/external-attacher-cfg-csi-mock-volumes-5352
Nov 13 05:33:20.770: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5352-6794/csi-attacher-role-cfg
Nov 13 05:33:20.773: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-provisioner
Nov 13 05:33:20.777: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5352
Nov 13 05:33:20.781: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5352
Nov 13 05:33:20.788: INFO: deleting *v1.Role: csi-mock-volumes-5352-6794/external-provisioner-cfg-csi-mock-volumes-5352
Nov 13 05:33:20.798: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5352-6794/csi-provisioner-role-cfg
Nov 13 05:33:20.805: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-resizer
Nov 13 05:33:20.809: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5352
Nov 13 05:33:20.812: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5352
Nov 13 05:33:20.816: INFO: deleting *v1.Role: csi-mock-volumes-5352-6794/external-resizer-cfg-csi-mock-volumes-5352
Nov 13 05:33:20.819: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5352-6794/csi-resizer-role-cfg
Nov 13 05:33:20.823: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-snapshotter
Nov 13 05:33:20.826: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5352
Nov 13 05:33:20.830: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5352
Nov 13 05:33:20.833: INFO: deleting *v1.Role: csi-mock-volumes-5352-6794/external-snapshotter-leaderelection-csi-mock-volumes-5352
Nov 13 05:33:20.836: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5352-6794/external-snapshotter-leaderelection
Nov 13 05:33:20.840: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5352-6794/csi-mock
Nov 13 05:33:20.843: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5352
Nov 13 05:33:20.847: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5352
Nov 13 05:33:20.850: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5352
Nov 13 05:33:20.854: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5352
Nov 13 05:33:20.857: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5352
Nov 13 05:33:20.860: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5352
Nov 13 05:33:20.864: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5352
Nov 13 05:33:20.868: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5352-6794/csi-mockplugin
Nov 13 05:33:20.871: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5352-6794/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-5352-6794
STEP: Waiting for namespaces [csi-mock-volumes-5352-6794] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:33:48.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:63.890 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIServiceAccountToken
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
token should not be plumbed down when CSIDriver is not deployed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":23,"skipped":679,"failed":0}
Nov 13 05:33:48.890: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:32:19.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report attach limit when limit is bigger than 0 [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529
STEP: Building a driver namespace object, basename csi-mock-volumes-9843
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Nov 13 05:32:19.860: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-attacher
Nov 13 05:32:19.863: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9843
Nov 13 05:32:19.863: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9843
Nov 13 05:32:19.866: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9843
Nov 13 05:32:19.869: INFO: creating *v1.Role: csi-mock-volumes-9843-1385/external-attacher-cfg-csi-mock-volumes-9843
Nov 13 05:32:19.872: INFO: creating *v1.RoleBinding: csi-mock-volumes-9843-1385/csi-attacher-role-cfg
Nov 13 05:32:19.875: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-provisioner
Nov 13 05:32:19.877: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9843
Nov 13 05:32:19.878: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9843
Nov 13 05:32:19.880: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9843
Nov 13 05:32:19.884: INFO: creating *v1.Role: csi-mock-volumes-9843-1385/external-provisioner-cfg-csi-mock-volumes-9843
Nov 13 05:32:19.886: INFO: creating *v1.RoleBinding: csi-mock-volumes-9843-1385/csi-provisioner-role-cfg
Nov 13 05:32:19.889: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-resizer
Nov 13 05:32:19.891: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9843
Nov 13 05:32:19.891: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9843
Nov 13 05:32:19.894: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9843
Nov 13 05:32:19.896: INFO: creating *v1.Role: csi-mock-volumes-9843-1385/external-resizer-cfg-csi-mock-volumes-9843
Nov 13 05:32:19.899: INFO: creating *v1.RoleBinding: csi-mock-volumes-9843-1385/csi-resizer-role-cfg
Nov 13 05:32:19.901: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-snapshotter
Nov 13 05:32:19.906: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9843
Nov 13 05:32:19.906: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9843
Nov 13 05:32:19.908: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9843
Nov 13 05:32:19.911: INFO: creating *v1.Role: csi-mock-volumes-9843-1385/external-snapshotter-leaderelection-csi-mock-volumes-9843
Nov 13 05:32:19.914: INFO: creating *v1.RoleBinding: csi-mock-volumes-9843-1385/external-snapshotter-leaderelection
Nov 13 05:32:19.916: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-mock
Nov 13 05:32:19.920: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9843
Nov 13 05:32:19.922: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9843
Nov 13 05:32:19.926: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9843
Nov 13 05:32:19.928: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9843
Nov 13 05:32:19.931: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9843
Nov 13 05:32:19.934: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9843
Nov 13 05:32:19.936: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9843
Nov 13 05:32:19.939: INFO: creating *v1.StatefulSet: csi-mock-volumes-9843-1385/csi-mockplugin
Nov 13 05:32:19.944: INFO: creating *v1.StatefulSet: csi-mock-volumes-9843-1385/csi-mockplugin-attacher
Nov 13 05:32:19.947: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9843 to register on node node2
STEP: Creating pod
Nov 13 05:32:29.467: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:32:29.472: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wksvs] to have phase Bound
Nov 13 05:32:29.475: INFO: PersistentVolumeClaim pvc-wksvs found but phase is Pending instead of Bound.
Nov 13 05:32:31.479: INFO: PersistentVolumeClaim pvc-wksvs found and phase=Bound (2.00688008s)
STEP: Creating pod
Nov 13 05:32:43.503: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:32:43.506: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7qtn7] to have phase Bound
Nov 13 05:32:43.509: INFO: PersistentVolumeClaim pvc-7qtn7 found but phase is Pending instead of Bound.
Nov 13 05:32:45.512: INFO: PersistentVolumeClaim pvc-7qtn7 found and phase=Bound (2.005389847s)
STEP: Creating pod
Nov 13 05:32:57.535: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 13 05:32:57.540: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nvkcr] to have phase Bound
Nov 13 05:32:57.542: INFO: PersistentVolumeClaim pvc-nvkcr found but phase is Pending instead of Bound.
Nov 13 05:32:59.546: INFO: PersistentVolumeClaim pvc-nvkcr found and phase=Bound (2.005853086s)
STEP: Deleting pod pvc-volume-tester-dfdn2
Nov 13 05:33:09.567: INFO: Deleting pod "pvc-volume-tester-dfdn2" in namespace "csi-mock-volumes-9843"
Nov 13 05:33:09.572: INFO: Wait up to 5m0s for pod "pvc-volume-tester-dfdn2" to be fully deleted
STEP: Deleting pod pvc-volume-tester-6frn7
Nov 13 05:33:13.578: INFO: Deleting pod "pvc-volume-tester-6frn7" in namespace "csi-mock-volumes-9843"
Nov 13 05:33:13.583: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6frn7" to be fully deleted
STEP: Deleting pod pvc-volume-tester-jsbcx
Nov 13 05:33:17.590: INFO: Deleting pod "pvc-volume-tester-jsbcx" in namespace "csi-mock-volumes-9843"
Nov 13 05:33:17.597: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jsbcx" to be fully deleted
STEP: Deleting claim pvc-wksvs
Nov 13 05:33:21.609: INFO: Waiting up to 2m0s for PersistentVolume pvc-de2e055d-d574-4263-a87e-8d5acd8e8234 to get deleted
Nov 13 05:33:21.611: INFO: PersistentVolume pvc-de2e055d-d574-4263-a87e-8d5acd8e8234 found and phase=Bound (1.956743ms)
Nov 13 05:33:23.613: INFO: PersistentVolume pvc-de2e055d-d574-4263-a87e-8d5acd8e8234 was removed
STEP: Deleting claim pvc-7qtn7
Nov 13 05:33:23.619: INFO: Waiting up to 2m0s for PersistentVolume pvc-cc9fe947-03b7-46e1-86e7-69cf1207b3cf to get deleted
Nov 13 05:33:23.621: INFO: PersistentVolume pvc-cc9fe947-03b7-46e1-86e7-69cf1207b3cf found and phase=Bound (2.230148ms)
Nov 13 05:33:25.626: INFO: PersistentVolume pvc-cc9fe947-03b7-46e1-86e7-69cf1207b3cf was removed
STEP: Deleting claim pvc-nvkcr
Nov 13 05:33:25.637: INFO: Waiting up to 2m0s for PersistentVolume pvc-1a0e34f6-9f79-4ab7-86b2-d08e0ffafb42 to get deleted
Nov 13 05:33:25.639: INFO: PersistentVolume pvc-1a0e34f6-9f79-4ab7-86b2-d08e0ffafb42 found and phase=Bound (2.036287ms)
Nov 13 05:33:27.646: INFO: PersistentVolume pvc-1a0e34f6-9f79-4ab7-86b2-d08e0ffafb42 found and phase=Released (2.008656869s)
Nov 13 05:33:29.649: INFO: PersistentVolume pvc-1a0e34f6-9f79-4ab7-86b2-d08e0ffafb42 found and phase=Released (4.011764499s)
Nov 13 05:33:31.655: INFO: PersistentVolume pvc-1a0e34f6-9f79-4ab7-86b2-d08e0ffafb42 found and phase=Released (6.017451544s)
Nov 13 05:33:33.659: INFO: PersistentVolume pvc-1a0e34f6-9f79-4ab7-86b2-d08e0ffafb42 was removed
STEP: Deleting storageclass csi-mock-volumes-9843-sc55jc7
STEP: Deleting storageclass csi-mock-volumes-9843-sc6l8l7
STEP: Deleting storageclass csi-mock-volumes-9843-sclfprk
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9843
STEP: Waiting for namespaces [csi-mock-volumes-9843] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:33:39.680: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-attacher
Nov 13 05:33:39.684: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9843
Nov 13 05:33:39.689: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9843
Nov 13 05:33:39.692: INFO: deleting *v1.Role: csi-mock-volumes-9843-1385/external-attacher-cfg-csi-mock-volumes-9843
Nov 13 05:33:39.696: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9843-1385/csi-attacher-role-cfg
Nov 13 05:33:39.699: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-provisioner
Nov 13 05:33:39.702: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9843
Nov 13 05:33:39.708: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9843
Nov 13 05:33:39.716: INFO: deleting *v1.Role: csi-mock-volumes-9843-1385/external-provisioner-cfg-csi-mock-volumes-9843
Nov 13 05:33:39.725: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9843-1385/csi-provisioner-role-cfg
Nov 13 05:33:39.734: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-resizer
Nov 13 05:33:39.738: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9843
Nov 13 05:33:39.742: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9843
Nov 13 05:33:39.745: INFO: deleting *v1.Role: csi-mock-volumes-9843-1385/external-resizer-cfg-csi-mock-volumes-9843
Nov 13 05:33:39.749: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9843-1385/csi-resizer-role-cfg
Nov 13 05:33:39.752: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-snapshotter
Nov 13 05:33:39.755: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9843
Nov 13 05:33:39.758: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9843
Nov 13 05:33:39.762: INFO: deleting *v1.Role: csi-mock-volumes-9843-1385/external-snapshotter-leaderelection-csi-mock-volumes-9843
Nov 13 05:33:39.766: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9843-1385/external-snapshotter-leaderelection
Nov 13 05:33:39.769: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9843-1385/csi-mock
Nov 13 05:33:39.772: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9843
Nov 13 05:33:39.776: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9843
Nov 13 05:33:39.781: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9843
Nov 13 05:33:39.828: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9843
Nov 13 05:33:39.832: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9843
Nov 13 05:33:39.835: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9843
Nov 13 05:33:39.839: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9843
Nov 13 05:33:39.842: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9843-1385/csi-mockplugin
Nov 13 05:33:39.845: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9843-1385/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-9843-1385
STEP: Waiting for namespaces [csi-mock-volumes-9843-1385] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:34:07.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:108.065 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI volume limit information using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:528
should report attach limit when limit is bigger than 0 [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]","total":-1,"completed":9,"skipped":476,"failed":0}
Nov 13 05:34:07.871: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:33:02.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] exhausted, late binding, no topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
STEP: Building a driver namespace object, basename csi-mock-volumes-7134
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock proxy
Nov 13 05:33:02.934: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-attacher
Nov 13 05:33:02.938: INFO: creating *v1.ClusterRole:
external-attacher-runner-csi-mock-volumes-7134 Nov 13 05:33:02.938: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7134 Nov 13 05:33:02.941: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7134 Nov 13 05:33:02.944: INFO: creating *v1.Role: csi-mock-volumes-7134-5192/external-attacher-cfg-csi-mock-volumes-7134 Nov 13 05:33:02.947: INFO: creating *v1.RoleBinding: csi-mock-volumes-7134-5192/csi-attacher-role-cfg Nov 13 05:33:02.949: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-provisioner Nov 13 05:33:02.951: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7134 Nov 13 05:33:02.951: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7134 Nov 13 05:33:02.955: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7134 Nov 13 05:33:02.957: INFO: creating *v1.Role: csi-mock-volumes-7134-5192/external-provisioner-cfg-csi-mock-volumes-7134 Nov 13 05:33:02.960: INFO: creating *v1.RoleBinding: csi-mock-volumes-7134-5192/csi-provisioner-role-cfg Nov 13 05:33:02.962: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-resizer Nov 13 05:33:02.965: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7134 Nov 13 05:33:02.965: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7134 Nov 13 05:33:02.967: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7134 Nov 13 05:33:02.970: INFO: creating *v1.Role: csi-mock-volumes-7134-5192/external-resizer-cfg-csi-mock-volumes-7134 Nov 13 05:33:02.973: INFO: creating *v1.RoleBinding: csi-mock-volumes-7134-5192/csi-resizer-role-cfg Nov 13 05:33:02.975: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-snapshotter Nov 13 05:33:02.977: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7134 Nov 13 05:33:02.978: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7134 Nov 13 
05:33:02.980: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7134 Nov 13 05:33:02.982: INFO: creating *v1.Role: csi-mock-volumes-7134-5192/external-snapshotter-leaderelection-csi-mock-volumes-7134 Nov 13 05:33:02.985: INFO: creating *v1.RoleBinding: csi-mock-volumes-7134-5192/external-snapshotter-leaderelection Nov 13 05:33:02.988: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-mock Nov 13 05:33:02.990: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7134 Nov 13 05:33:02.992: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7134 Nov 13 05:33:02.995: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7134 Nov 13 05:33:02.998: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7134 Nov 13 05:33:03.000: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7134 Nov 13 05:33:03.003: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7134 Nov 13 05:33:03.005: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7134 Nov 13 05:33:03.008: INFO: creating *v1.StatefulSet: csi-mock-volumes-7134-5192/csi-mockplugin Nov 13 05:33:03.012: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7134 Nov 13 05:33:03.015: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7134" Nov 13 05:33:03.018: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7134 to register on node node1 I1113 05:33:08.088397 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7134","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:33:08.185246 29 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:33:08.187155 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7134","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:33:08.188426 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:33:08.229377 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:33:08.582785 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7134"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:33:12.545: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1113 05:33:12.576766 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1113 05:33:12.581278 29 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f"}}},"Error":"","FullError":null} I1113 05:33:13.865522 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:33:13.870: INFO: >>> kubeConfig: /root/.kube/config I1113 05:33:13.959755 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f","storage.kubernetes.io/csiProvisionerIdentity":"1636781588267-8081-csi-mock-csi-mock-volumes-7134"}},"Response":{},"Error":"","FullError":null} I1113 05:33:13.963352 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:33:13.965: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:33:14.065: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:33:14.164: INFO: >>> kubeConfig: /root/.kube/config I1113 05:33:14.247000 29 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f/globalmount","target_path":"/var/lib/kubelet/pods/105f5bbd-3773-4e62-938b-349a2c2d4af1/volumes/kubernetes.io~csi/pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f","storage.kubernetes.io/csiProvisionerIdentity":"1636781588267-8081-csi-mock-csi-mock-volumes-7134"}},"Response":{},"Error":"","FullError":null} Nov 13 05:33:18.566: INFO: Deleting pod "pvc-volume-tester-j6qkf" in namespace "csi-mock-volumes-7134" Nov 13 05:33:18.573: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j6qkf" to be fully deleted Nov 13 05:33:20.303: INFO: >>> kubeConfig: /root/.kube/config I1113 05:33:20.411592 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/105f5bbd-3773-4e62-938b-349a2c2d4af1/volumes/kubernetes.io~csi/pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f/mount"},"Response":{},"Error":"","FullError":null} I1113 05:33:20.506568 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:33:20.508228 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f/globalmount"},"Response":{},"Error":"","FullError":null} I1113 05:33:22.598636 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 13 05:33:23.586: INFO: PVC event ADDED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193675", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000e2dfe0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00093a1b0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004308230), VolumeMode:(*v1.PersistentVolumeMode)(0xc004308240), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:33:23.586: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193678", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003235920), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003235938)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003235950), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003235968)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc005327500), VolumeMode:(*v1.PersistentVolumeMode)(0xc005327510), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:33:23.586: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193679", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7134", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052864c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0052864e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052864f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005286510)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005286528), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005286540)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0009309c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000930a20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:33:23.586: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193687", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7134", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005286570), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005286588)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052865a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0052865b8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052865d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0052865e8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", StorageClassName:(*string)(0xc000930af0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000930b30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:33:23.586: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", 
Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193688", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7134", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005286618), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005286630)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005286648), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005286660)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005286678), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005286690)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", StorageClassName:(*string)(0xc000930c50), VolumeMode:(*v1.PersistentVolumeMode)(0xc000930c70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:33:23.587: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193869", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004163950), DeletionGracePeriodSeconds:(*int64)(0xc00463bea8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7134", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004163968), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004163980)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004163998), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041639b0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041639c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041639e0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", StorageClassName:(*string)(0xc00594acb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00594acc0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:33:23.587: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mqcp2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7134", SelfLink:"", UID:"4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", ResourceVersion:"193870", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772378392, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004163a10), DeletionGracePeriodSeconds:(*int64)(0xc00463bf78), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7134", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004163a28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004163a40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004163a58), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc004163a70)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004163a88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004163aa0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4c9f5c6c-c0d8-4a9e-a9d7-5f11d1eab18f", StorageClassName:(*string)(0xc00594ad00), VolumeMode:(*v1.PersistentVolumeMode)(0xc00594ad10), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-j6qkf Nov 13 05:33:23.587: INFO: Deleting pod "pvc-volume-tester-j6qkf" in namespace "csi-mock-volumes-7134" STEP: Deleting claim pvc-mqcp2 STEP: Deleting storageclass csi-mock-volumes-7134-sc7hkfs STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7134 STEP: Waiting for namespaces [csi-mock-volumes-7134] to vanish STEP: uninstalling csi mock driver Nov 13 05:33:29.614: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-attacher Nov 13 05:33:29.619: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7134 Nov 13 05:33:29.623: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7134 Nov 13 05:33:29.627: INFO: deleting *v1.Role: csi-mock-volumes-7134-5192/external-attacher-cfg-csi-mock-volumes-7134 Nov 13 05:33:29.631: 
INFO: deleting *v1.RoleBinding: csi-mock-volumes-7134-5192/csi-attacher-role-cfg Nov 13 05:33:29.634: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-provisioner Nov 13 05:33:29.639: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7134 Nov 13 05:33:29.643: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7134 Nov 13 05:33:29.650: INFO: deleting *v1.Role: csi-mock-volumes-7134-5192/external-provisioner-cfg-csi-mock-volumes-7134 Nov 13 05:33:29.657: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7134-5192/csi-provisioner-role-cfg Nov 13 05:33:29.660: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-resizer Nov 13 05:33:29.667: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7134 Nov 13 05:33:29.670: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7134 Nov 13 05:33:29.674: INFO: deleting *v1.Role: csi-mock-volumes-7134-5192/external-resizer-cfg-csi-mock-volumes-7134 Nov 13 05:33:29.677: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7134-5192/csi-resizer-role-cfg Nov 13 05:33:29.680: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-snapshotter Nov 13 05:33:29.683: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7134 Nov 13 05:33:29.686: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7134 Nov 13 05:33:29.689: INFO: deleting *v1.Role: csi-mock-volumes-7134-5192/external-snapshotter-leaderelection-csi-mock-volumes-7134 Nov 13 05:33:29.694: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7134-5192/external-snapshotter-leaderelection Nov 13 05:33:29.697: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7134-5192/csi-mock Nov 13 05:33:29.700: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7134 Nov 13 05:33:29.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7134 Nov 13 
05:33:29.706: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7134 Nov 13 05:33:29.710: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7134 Nov 13 05:33:29.714: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7134 Nov 13 05:33:29.717: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7134 Nov 13 05:33:29.721: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7134 Nov 13 05:33:29.724: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7134-5192/csi-mockplugin Nov 13 05:33:29.727: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7134 STEP: deleting the driver namespace: csi-mock-volumes-7134-5192 STEP: Waiting for namespaces [csi-mock-volumes-7134-5192] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:34:13.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:70.875 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":23,"skipped":989,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} Nov 13 05:34:13.748: INFO: Running AfterSuite actions on all nodes 
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:29:33.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557
STEP: Creating configMap with name cm-test-opt-create-4460ceea-3239-4340-bb3e-a5947936c978
STEP: Creating the pod
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:34:33.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2081" for this suite.
• [SLOW TEST:300.067 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":6,"skipped":269,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]}
Nov 13 05:34:33.508: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:30:28.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421
STEP: Creating secret with name s-test-opt-create-d7d47c09-db76-4f55-97e2-8211daff3776
STEP: Creating the pod
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:35:28.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5108" for this suite.
• [SLOW TEST:300.062 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:30:29.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430
STEP: Creating the pod
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:35:29.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8588" for this suite.
• [SLOW TEST:300.059 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430
------------------------------
{"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":9,"skipped":359,"failed":0}
Nov 13 05:35:29.182: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:31:54.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[It] should fail due to wrong node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324
STEP: Initializing test volumes
Nov 13 05:31:59.025: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-847ae8ca-b0c4-4f63-8f74-c6d064ef9312] Namespace:persistent-local-volumes-test-5072 PodName:hostexec-node1-nxh72 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:31:59.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:31:59.486: INFO: Creating a PV followed by a PVC
Nov 13 05:31:59.492: INFO: Waiting for PV local-pvs7wwk to bind to PVC pvc-xsj9r
Nov 13 05:31:59.492: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xsj9r] to have phase Bound
Nov 13 05:31:59.495: INFO: PersistentVolumeClaim pvc-xsj9r found but phase is Pending instead of Bound.
Nov 13 05:32:01.497: INFO: PersistentVolumeClaim pvc-xsj9r found but phase is Pending instead of Bound.
Nov 13 05:32:03.500: INFO: PersistentVolumeClaim pvc-xsj9r found but phase is Pending instead of Bound.
Nov 13 05:32:05.504: INFO: PersistentVolumeClaim pvc-xsj9r found but phase is Pending instead of Bound.
Nov 13 05:32:07.508: INFO: PersistentVolumeClaim pvc-xsj9r found but phase is Pending instead of Bound.
Nov 13 05:32:09.510: INFO: PersistentVolumeClaim pvc-xsj9r found but phase is Pending instead of Bound.
Nov 13 05:32:11.515: INFO: PersistentVolumeClaim pvc-xsj9r found and phase=Bound (12.022365005s)
Nov 13 05:32:11.515: INFO: Waiting up to 3m0s for PersistentVolume local-pvs7wwk to have phase Bound
Nov 13 05:32:11.518: INFO: PersistentVolume local-pvs7wwk found and phase=Bound (3.014406ms)
STEP: Cleaning up PVC and PV
Nov 13 05:37:11.545: INFO: Deleting PersistentVolumeClaim "pvc-xsj9r"
Nov 13 05:37:11.550: INFO: Deleting PersistentVolume "local-pvs7wwk"
STEP: Removing the test directory
Nov 13 05:37:11.554: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-847ae8ca-b0c4-4f63-8f74-c6d064ef9312] Namespace:persistent-local-volumes-test-5072 PodName:hostexec-node1-nxh72 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:37:11.554: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:37:11.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5072" for this suite.
• [SLOW TEST:316.684 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Local volume that cannot be mounted [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304
    should fail due to wrong node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node","total":-1,"completed":25,"skipped":639,"failed":0}
Nov 13 05:37:11.666: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":13,"skipped":450,"failed":0}
Nov 13 05:35:28.695: INFO: Running AfterSuite actions on all nodes
Nov 13 05:37:11.732: INFO: Running AfterSuite actions on node 1
Nov 13 05:37:11.732: INFO: Skipping dumping logs from cluster

Summarizing 2 Failures:

[Fail] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume [It] should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810

[Fail] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume [It] should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810

Ran 163 of 5770 Specs in 1106.484 seconds
FAIL! -- 161 Passed | 2 Failed | 0 Pending | 5607 Skipped

Ginkgo ran 1 suite in 18m28.04325476s
Test Suite Failed
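The "Summarizing N Failures:" section at the end of a Ginkgo run can be pulled out of a saved log mechanically, for example to build a re-run focus list for the failed specs. A minimal sketch in Python; the helper name and the shortened sample text are illustrative, not part of the suite:

```python
def failed_specs(log_text: str) -> list[str]:
    """Return the spec descriptions from '[Fail] ...' lines in a Ginkgo v1 log."""
    return [
        line.strip()[len("[Fail] "):]
        for line in log_text.splitlines()
        if line.strip().startswith("[Fail] ")
    ]

# Shortened copy of the failure summary above (source paths elided).
sample = """Summarizing 2 Failures:
[Fail] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume [It] should set same fsGroup for two pods simultaneously [Slow]
[Fail] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume [It] should set fsGroup for one pod [Slow]
"""

for spec in failed_specs(sample):
    print(spec)
```

The extracted names could then feed a `-ginkgo.focus` expression (after regex-escaping) to re-run just those two fsGroup specs.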