I0321 23:32:50.471008 7 e2e.go:129] Starting e2e run "ae71a830-1338-400c-950f-e202b652d7c9" on Ginkgo node 1
{"msg":"Test Suite starting","total":133,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616369568 - Will randomize all specs
Will run 133 of 5737 specs

Mar 21 23:32:50.534: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:32:50.536: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 23:32:50.584: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 23:32:51.064: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 21 23:32:51.064: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 23:32:51.064: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 23:32:51.114: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 23:32:51.114: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 23:32:51.114: INFO: e2e test version: v1.21.0-beta.1
Mar 21 23:32:51.115: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 21 23:32:51.115: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:32:51.178: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:32:51.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
Mar 21 23:32:51.401: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77
Mar 21 23:32:51.403: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:32:51.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-9636" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110
Mar 21 23:32:51.638: INFO: AfterEach: Cleaning up test resources
Mar 21 23:32:51.638: INFO: pvc is nil
Mar 21 23:32:51.638: INFO: pv is nil

S [SKIPPING] in Spec Setup (BeforeEach) [0.460 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
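The spec above never runs: the suite was started against the local provider (a kind cluster, per the kindnet daemonset earlier in the log), and the GCE PD detach behavior requires a GCE- or GKE-backed cluster. A sketch of how such provider-gated specs are normally targeted, assuming a built e2e.test binary and its standard framework flags; project and zone values are placeholders:

# Sketch only: point the e2e binary at a GCE cluster so the
# "[sig-storage] PersistentVolumes GCEPD" specs are not skipped.
# my-project and us-central1-b are placeholder values.
./e2e.test \
  --kubeconfig="$HOME/.kube/config" \
  --provider=gce \
  --gce-project=my-project \
  --gce-zone=us-central1-b \
  --ginkgo.focus='PersistentVolumes GCEPD'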
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:32:51.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1"
Mar 21 23:32:56.099: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1 && dd if=/dev/zero of=/tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1/file] Namespace:persistent-local-volumes-test-9525 PodName:hostexec-latest-worker2-jgb9v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:32:56.099: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:32:56.297: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9525 PodName:hostexec-latest-worker2-jgb9v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:32:56.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 21 23:32:56.526: INFO: Creating a PV followed by a PVC
Mar 21 23:32:56.705: INFO: Waiting for PV local-pvwbqt8 to bind to PVC pvc-jw7qd
Mar 21 23:32:56.705: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jw7qd] to have phase Bound
Mar 21 23:32:57.232: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:32:59.512: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:33:01.625: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:33:04.479: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:33:06.651: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:33:08.818: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:33:11.047: INFO: PersistentVolumeClaim pvc-jw7qd found but phase is Pending instead of Bound.
Mar 21 23:33:13.513: INFO: PersistentVolumeClaim pvc-jw7qd found and phase=Bound (16.808241906s)
Mar 21 23:33:13.513: INFO: Waiting up to 3m0s for PersistentVolume local-pvwbqt8 to have phase Bound
Mar 21 23:33:13.556: INFO: PersistentVolume local-pvwbqt8 found and phase=Bound (42.592845ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
STEP: Creating pod1
STEP: Creating a pod
Mar 21 23:33:26.758: INFO: pod "pod-d636fea6-8554-4c45-a1a2-8461740aae4d" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 21 23:33:26.758: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9525 PodName:pod-d636fea6-8554-4c45-a1a2-8461740aae4d ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:33:26.758: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:33:27.684: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
Mar 21 23:33:27.684: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9525 PodName:pod-d636fea6-8554-4c45-a1a2-8461740aae4d ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:33:27.684: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:33:28.078: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Deleting pod1
STEP: Deleting pod pod-d636fea6-8554-4c45-a1a2-8461740aae4d in namespace persistent-local-volumes-test-9525
STEP: Creating pod2
STEP: Creating a pod
Mar 21 23:33:38.489: INFO: pod "pod-c101b1be-279d-454c-b121-f4cbb84912d9" created on Node "latest-worker2"
STEP: Reading in pod2
Mar 21 23:33:38.489: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9525 PodName:pod-c101b1be-279d-454c-b121-f4cbb84912d9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:33:38.489: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:33:38.631: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Deleting pod2
STEP: Deleting pod pod-c101b1be-279d-454c-b121-f4cbb84912d9 in namespace persistent-local-volumes-test-9525
[AfterEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 21 23:33:38.704: INFO: Deleting PersistentVolumeClaim "pvc-jw7qd"
Mar 21 23:33:38.773: INFO: Deleting PersistentVolume "local-pvwbqt8"
Mar 21 23:33:38.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9525 PodName:hostexec-latest-worker2-jgb9v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:38.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1/file
Mar 21 23:33:39.056: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9525 PodName:hostexec-latest-worker2-jgb9v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:39.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1
Mar 21 23:33:39.243: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7a92f46e-479d-435e-84a1-c4fa807becf1] Namespace:persistent-local-volumes-test-9525 PodName:hostexec-latest-worker2-jgb9v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:39.243: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:33:40.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9525" for this suite.

• [SLOW TEST:48.691 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":1,"skipped":78,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
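The "blockfswithoutformat" volume type in the test above is backed by a loop device that the suite creates on the node through a hostexec pod. The same setup and teardown can be reproduced by hand; a minimal sketch using the exact commands from the log (the directory name is a placeholder, the test generates a random UUID path, and the loop device is whatever losetup -f happens to pick):

# Create a 20 MiB backing file and attach it to a free loop device
DIR=/tmp/local-volume-test-example                  # hypothetical path
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120   # 5120 x 4 KiB = 20 MiB
losetup -f "$DIR/file"                              # attach first free loop device

# Discover which loop device was picked, as the test does
E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
echo "${E2E_LOOP_DEV}"

# Teardown, mirroring the AfterEach above
losetup -d "${E2E_LOOP_DEV}"
rm -r "$DIR"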
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:33:40.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb"
Mar 21 23:33:49.170: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb && dd if=/dev/zero of=/tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb/file] Namespace:persistent-local-volumes-test-1999 PodName:hostexec-latest-worker2-jxph8 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:49.170: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:33:50.451: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1999 PodName:hostexec-latest-worker2-jxph8 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:50.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 21 23:33:51.367: INFO: Creating a PV followed by a PVC
Mar 21 23:33:51.598: INFO: Waiting for PV local-pvgs8fc to bind to PVC pvc-f6bzc
Mar 21 23:33:51.598: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-f6bzc] to have phase Bound
Mar 21 23:33:51.836: INFO: PersistentVolumeClaim pvc-f6bzc found but phase is Pending instead of Bound.
Mar 21 23:33:54.221: INFO: PersistentVolumeClaim pvc-f6bzc found and phase=Bound (2.62251805s)
Mar 21 23:33:54.221: INFO: Waiting up to 3m0s for PersistentVolume local-pvgs8fc to have phase Bound
Mar 21 23:33:54.526: INFO: PersistentVolume local-pvgs8fc found and phase=Bound (305.291249ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Mar 21 23:33:54.905: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 21 23:33:54.907: INFO: Deleting PersistentVolumeClaim "pvc-f6bzc"
Mar 21 23:33:55.107: INFO: Deleting PersistentVolume "local-pvgs8fc"
Mar 21 23:33:55.298: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1999 PodName:hostexec-latest-worker2-jxph8 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:55.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb/file
Mar 21 23:33:55.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1999 PodName:hostexec-latest-worker2-jxph8 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:55.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb
Mar 21 23:33:56.287: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0796e751-14dd-4e26-be4b-780b6b0429fb] Namespace:persistent-local-volumes-test-1999 PodName:hostexec-latest-worker2-jxph8 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:33:56.287: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:33:58.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1999" for this suite.
S [SKIPPING] [18.220 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:33:58.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
[AfterEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648
STEP: Clean PV local-pvvmjsz
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:35:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8193" for this suite.

• [SLOW TEST:96.848 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629
    all pods should be running
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":133,"completed":2,"skipped":120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
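The spec above binds one local PV to one PVC and then schedules 50 pods against that single claim; because a local volume is pinned to a node via PV nodeAffinity, every pod lands on the same node, so all of them can mount it and run. A minimal sketch of the kind of PV/PVC pair involved, with hypothetical names, path, and sizes (the real test generates all of these):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv          # hypothetical; the test generates names like local-pvvmjsz
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-example   # must already exist on the node
  nodeAffinity:                   # pins every consumer of this PV to one node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["latest-worker2"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
EOF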
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:35:35.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume by restarting pod if attach=on, nodeExpansion=on
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-9492
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 21 23:35:37.069: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-attacher
Mar 21 23:35:37.076: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9492
Mar 21 23:35:37.076: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9492
Mar 21 23:35:37.093: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9492
Mar 21 23:35:37.099: INFO: creating *v1.Role: csi-mock-volumes-9492-9274/external-attacher-cfg-csi-mock-volumes-9492
Mar 21 23:35:37.162: INFO: creating *v1.RoleBinding: csi-mock-volumes-9492-9274/csi-attacher-role-cfg
Mar 21 23:35:37.176: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-provisioner
Mar 21 23:35:37.196: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9492
Mar 21 23:35:37.196: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9492
Mar 21 23:35:37.249: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9492
Mar 21 23:35:37.317: INFO: creating *v1.Role: csi-mock-volumes-9492-9274/external-provisioner-cfg-csi-mock-volumes-9492
Mar 21 23:35:37.343: INFO: creating *v1.RoleBinding: csi-mock-volumes-9492-9274/csi-provisioner-role-cfg
Mar 21 23:35:37.381: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-resizer
Mar 21 23:35:37.400: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9492
Mar 21 23:35:37.400: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9492
Mar 21 23:35:37.405: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9492
Mar 21 23:35:37.411: INFO: creating *v1.Role: csi-mock-volumes-9492-9274/external-resizer-cfg-csi-mock-volumes-9492
Mar 21 23:35:37.454: INFO: creating *v1.RoleBinding: csi-mock-volumes-9492-9274/csi-resizer-role-cfg
Mar 21 23:35:37.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-snapshotter
Mar 21 23:35:37.555: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9492
Mar 21 23:35:37.555: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9492
Mar 21 23:35:37.665: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9492
Mar 21 23:35:38.174: INFO: creating *v1.Role: csi-mock-volumes-9492-9274/external-snapshotter-leaderelection-csi-mock-volumes-9492
Mar 21 23:35:38.457: INFO: creating *v1.RoleBinding: csi-mock-volumes-9492-9274/external-snapshotter-leaderelection
Mar 21 23:35:38.647: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-mock
Mar 21 23:35:38.718: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9492
Mar 21 23:35:38.735: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9492
Mar 21 23:35:38.772: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9492
Mar 21 23:35:38.782: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9492
Mar 21 23:35:38.808: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9492
Mar 21 23:35:38.833: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9492
Mar 21 23:35:38.860: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9492
Mar 21 23:35:38.910: INFO: creating *v1.StatefulSet: csi-mock-volumes-9492-9274/csi-mockplugin
Mar 21 23:35:38.923: INFO: creating *v1.StatefulSet: csi-mock-volumes-9492-9274/csi-mockplugin-attacher
Mar 21 23:35:38.972: INFO: creating *v1.StatefulSet: csi-mock-volumes-9492-9274/csi-mockplugin-resizer
Mar 21 23:35:39.110: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9492 to register on node latest-worker2
STEP: Creating pod
Mar 21 23:35:57.449: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 21 23:35:57.664: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-78wgf] to have phase Bound
Mar 21 23:35:57.740: INFO: PersistentVolumeClaim pvc-78wgf found but phase is Pending instead of Bound.
Mar 21 23:35:59.772: INFO: PersistentVolumeClaim pvc-78wgf found and phase=Bound (2.107819353s)
STEP: Expanding current pvc
STEP: Waiting for persistent volume resize to finish
STEP: Checking for conditions on pvc
STEP: Deleting the previously created pod
Mar 21 23:36:33.691: INFO: Deleting pod "pvc-volume-tester-p8sn5" in namespace "csi-mock-volumes-9492"
Mar 21 23:36:34.135: INFO: Wait up to 5m0s for pod "pvc-volume-tester-p8sn5" to be fully deleted
STEP: Creating a new pod with same volume
STEP: Waiting for PVC resize to finish
STEP: Deleting pod pvc-volume-tester-p8sn5
Mar 21 23:37:42.797: INFO: Deleting pod "pvc-volume-tester-p8sn5" in namespace "csi-mock-volumes-9492"
STEP: Deleting pod pvc-volume-tester-mjnh5
Mar 21 23:37:42.868: INFO: Deleting pod "pvc-volume-tester-mjnh5" in namespace "csi-mock-volumes-9492"
Mar 21 23:37:42.927: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mjnh5" to be fully deleted
STEP: Deleting claim pvc-78wgf
Mar 21 23:37:55.488: INFO: Waiting up to 2m0s for PersistentVolume pvc-2f3ef82a-c009-48d6-8272-5e25343e5d13 to get deleted
Mar 21 23:37:55.557: INFO: PersistentVolume pvc-2f3ef82a-c009-48d6-8272-5e25343e5d13 found and phase=Bound (68.698625ms)
Mar 21 23:37:57.612: INFO: PersistentVolume pvc-2f3ef82a-c009-48d6-8272-5e25343e5d13 was removed
STEP: Deleting storageclass csi-mock-volumes-9492-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9492
STEP: Waiting for namespaces [csi-mock-volumes-9492] to vanish
STEP: uninstalling csi mock driver
Mar 21 23:38:07.795: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-attacher
Mar 21 23:38:07.973: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9492
Mar 21 23:38:08.000: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9492
Mar 21 23:38:08.134: INFO: deleting *v1.Role: csi-mock-volumes-9492-9274/external-attacher-cfg-csi-mock-volumes-9492
Mar 21 23:38:08.174: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9492-9274/csi-attacher-role-cfg
Mar 21 23:38:08.277: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-provisioner
Mar 21 23:38:08.327: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9492
Mar 21 23:38:08.357: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9492
Mar 21 23:38:08.456: INFO: deleting *v1.Role: csi-mock-volumes-9492-9274/external-provisioner-cfg-csi-mock-volumes-9492
Mar 21 23:38:08.503: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9492-9274/csi-provisioner-role-cfg
Mar 21 23:38:08.574: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-resizer
Mar 21 23:38:08.703: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9492
Mar 21 23:38:08.759: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9492
Mar 21 23:38:08.847: INFO: deleting *v1.Role: csi-mock-volumes-9492-9274/external-resizer-cfg-csi-mock-volumes-9492
Mar 21 23:38:08.905: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9492-9274/csi-resizer-role-cfg
Mar 21 23:38:09.049: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-snapshotter
Mar 21 23:38:09.106: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9492
Mar 21 23:38:09.179: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9492
Mar 21 23:38:09.235: INFO: deleting *v1.Role: csi-mock-volumes-9492-9274/external-snapshotter-leaderelection-csi-mock-volumes-9492
Mar 21 23:38:09.320: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9492-9274/external-snapshotter-leaderelection
Mar 21 23:38:09.388: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9492-9274/csi-mock
Mar 21 23:38:09.493: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9492
Mar 21 23:38:09.591: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9492
Mar 21 23:38:09.617: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9492
Mar 21 23:38:09.733: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9492
Mar 21 23:38:09.738: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9492
Mar 21 23:38:09.762: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9492
Mar 21 23:38:09.817: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9492
Mar 21 23:38:09.890: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9492-9274/csi-mockplugin
Mar 21 23:38:09.939: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9492-9274/csi-mockplugin-attacher
Mar 21 23:38:10.026: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9492-9274/csi-mockplugin-resizer
STEP: deleting the driver namespace: csi-mock-volumes-9492-9274
STEP: Waiting for namespaces [csi-mock-volumes-9492-9274] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:38:50.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:194.830 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":133,"completed":3,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
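The expansion flow in the test above is driven entirely through the claim: the suite patches spec.resources.requests.storage upward, waits for the controller-side resize, checks the PVC's conditions, and then deletes and recreates the pod so the node-side expansion can complete on the next mount. Against a real expandable driver the same flow looks roughly like this, assuming a StorageClass with allowVolumeExpansion: true and hypothetical resource names:

# Ask for a larger size by patching the claim (shrinking is not allowed)
kubectl patch pvc example-pvc -n example-ns \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Inspect resize progress; a FileSystemResizePending condition means the
# node-side expansion still has to happen while a pod mounts the volume
kubectl get pvc example-pvc -n example-ns -o jsonpath='{.status.conditions}'

# For attach=on, nodeExpansion=on drivers, restarting the consuming pod
# lets the kubelet finish the expansion on the next mount
kubectl delete pod example-pod -n example-ns
kubectl apply -f example-pod.yaml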
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:38:50.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 21 23:38:50.703: INFO: Waiting up to 5m0s for pod "pod-c2b70cd2-c4df-42f0-888f-fb608849315b" in namespace "emptydir-3576" to be "Succeeded or Failed"
Mar 21 23:38:50.760: INFO: Pod "pod-c2b70cd2-c4df-42f0-888f-fb608849315b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.610086ms
Mar 21 23:38:52.821: INFO: Pod "pod-c2b70cd2-c4df-42f0-888f-fb608849315b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117519681s
Mar 21 23:38:55.030: INFO: Pod "pod-c2b70cd2-c4df-42f0-888f-fb608849315b": Phase="Running", Reason="", readiness=true. Elapsed: 4.32657076s
Mar 21 23:38:57.273: INFO: Pod "pod-c2b70cd2-c4df-42f0-888f-fb608849315b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.569499212s
STEP: Saw pod success
Mar 21 23:38:57.273: INFO: Pod "pod-c2b70cd2-c4df-42f0-888f-fb608849315b" satisfied condition "Succeeded or Failed"
Mar 21 23:38:57.453: INFO: Trying to get logs from node latest-worker2 pod pod-c2b70cd2-c4df-42f0-888f-fb608849315b container test-container: <nil>
STEP: delete the pod
Mar 21 23:38:59.054: INFO: Waiting for pod pod-c2b70cd2-c4df-42f0-888f-fb608849315b to disappear
Mar 21 23:38:59.288: INFO: Pod pod-c2b70cd2-c4df-42f0-888f-fb608849315b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:38:59.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3576" for this suite.

• [SLOW TEST:9.513 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":133,"completed":4,"skipped":225,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
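The EmptyDir spec above exercises a tmpfs-backed emptyDir with an fsGroup in the pod security context: files created by a root container must come out group-owned by the fsGroup GID. A minimal sketch of such a pod, with hypothetical names and a busybox stand-in for the agnhost mounttest image the suite actually uses:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-example
spec:
  securityContext:
    fsGroup: 123                # new files in the volume get group 123
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox              # stand-in; the e2e test uses agnhost
    command: ["sh", "-c", "echo hi > /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs, matching the "on tmpfs" variant above
EOF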
[sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:38:59.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 21 23:39:05.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-806ad802-190a-46b5-b0ff-c4ea20898ad6] Namespace:persistent-local-volumes-test-2595 PodName:hostexec-latest-worker2-tq24j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:39:05.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 21 23:39:05.466: INFO: Creating a PV followed by a PVC
Mar 21 23:39:05.663: INFO: Waiting for PV local-pv66pms to bind to PVC pvc-9rqbb
Mar 21 23:39:05.663: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-9rqbb] to have phase Bound
Mar 21 23:39:05.830: INFO: PersistentVolumeClaim pvc-9rqbb found but phase is Pending instead of Bound.
Mar 21 23:39:07.841: INFO: PersistentVolumeClaim pvc-9rqbb found but phase is Pending instead of Bound.
Mar 21 23:39:09.919: INFO: PersistentVolumeClaim pvc-9rqbb found but phase is Pending instead of Bound.
Mar 21 23:39:11.949: INFO: PersistentVolumeClaim pvc-9rqbb found and phase=Bound (6.286086364s)
Mar 21 23:39:11.949: INFO: Waiting up to 3m0s for PersistentVolume local-pv66pms to have phase Bound
Mar 21 23:39:11.979: INFO: PersistentVolume local-pv66pms found and phase=Bound (30.128919ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 21 23:39:18.292: INFO: pod "pod-bebd15ea-9b78-453c-9826-58bfc004c26a" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 21 23:39:18.292: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2595 PodName:pod-bebd15ea-9b78-453c-9826-58bfc004c26a ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:39:18.292: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:39:18.420: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[It] should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Mar 21 23:39:18.420: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2595 PodName:pod-bebd15ea-9b78-453c-9826-58bfc004c26a ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:39:18.420: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:39:18.529: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
[AfterEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-bebd15ea-9b78-453c-9826-58bfc004c26a in namespace persistent-local-volumes-test-2595
[AfterEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 21 23:39:18.621: INFO: Deleting PersistentVolumeClaim "pvc-9rqbb"
Mar 21 23:39:18.658: INFO: Deleting PersistentVolume "local-pv66pms"
STEP: Removing the test directory
Mar 21 23:39:18.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-806ad802-190a-46b5-b0ff-c4ea20898ad6] Namespace:persistent-local-volumes-test-2595 PodName:hostexec-latest-worker2-tq24j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:39:18.729: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:20.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2595" for this suite.

• [SLOW TEST:20.931 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":5,"skipped":299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:20.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Mar 21 23:39:25.098: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-104 PodName:hostexec-latest-worker-n76bc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 21 23:39:25.098: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:39:25.258: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Mar 21 23:39:25.258: INFO: exec latest-worker: stdout: "0\n"
Mar 21 23:39:25.258: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Mar 21 23:39:25.258: INFO: exec latest-worker: exit code: 0
Mar 21 23:39:25.258: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:25.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-104" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [4.669 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:25.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-ddac8ebb-6a65-46a7-995b-14fabd8a8b48
STEP: Creating a pod to test consume secrets
Mar 21 23:39:25.746: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71" in namespace "projected-7500" to be "Succeeded or Failed"
Mar 21 23:39:25.830: INFO: Pod "pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71": Phase="Pending", Reason="", readiness=false. Elapsed: 83.798096ms
Mar 21 23:39:28.213: INFO: Pod "pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.46745988s
Mar 21 23:39:30.224: INFO: Pod "pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71": Phase="Running", Reason="", readiness=true. Elapsed: 4.478753459s
Mar 21 23:39:32.249: INFO: Pod "pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.503193093s
STEP: Saw pod success
Mar 21 23:39:32.249: INFO: Pod "pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71" satisfied condition "Succeeded or Failed"
Mar 21 23:39:32.290: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 21 23:39:32.490: INFO: Waiting for pod pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71 to disappear
Mar 21 23:39:32.499: INFO: Pod pod-projected-secrets-2f118536-93b6-41e7-a7f4-71157f269d71 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:32.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7500" for this suite.
STEP: Destroying namespace "secret-namespace-4347" for this suite.

• [SLOW TEST:7.367 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":133,"completed":6,"skipped":385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
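The projected-secret spec above creates identically named secrets in two namespaces (hence the second namespace being destroyed) and verifies that the pod only ever sees the secret from its own namespace. A minimal sketch of a pod consuming a secret through a projected volume, with hypothetical names and a busybox stand-in for the suite's agnhost mounttest image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]   # data-1 is a hypothetical key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test   # resolved only in the pod's own namespace
EOF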
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:32.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Mar 21 23:39:33.208: INFO: Waiting up to 5m0s for pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6" in namespace "downward-api-1822" to be "Succeeded or Failed"
Mar 21 23:39:33.248: INFO: Pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.998609ms
Mar 21 23:39:35.331: INFO: Pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123250751s
Mar 21 23:39:37.851: INFO: Pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642924047s
Mar 21 23:39:39.915: INFO: Pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707195656s
Mar 21 23:39:41.926: INFO: Pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.71814331s
STEP: Saw pod success
Mar 21 23:39:41.926: INFO: Pod "metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6" satisfied condition "Succeeded or Failed"
Mar 21 23:39:41.956: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6 container client-container: <nil>
STEP: delete the pod
Mar 21 23:39:42.393: INFO: Waiting for pod metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6 to disappear
Mar 21 23:39:42.440: INFO: Pod metadata-volume-241f95fd-9676-472a-b568-4bcb39d602d6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:42.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1822" for this suite.

• [SLOW TEST:9.786 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":7,"skipped":410,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
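The downward API volume in the spec above materializes pod metadata as files; the check is that the "podname" file is readable by a non-root UID once fsGroup and a defaultMode are applied. A minimal sketch of such a pod, with hypothetical names and values (the suite generates its own names and uses the agnhost mounttest image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-example
spec:
  securityContext:
    runAsUser: 1001             # non-root reader
    fsGroup: 1001
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox              # stand-in for agnhost
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0440         # group-readable, so fsGroup 1001 can read it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF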
[sig-storage] Volumes GlusterFS should be mountable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:42.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68
Mar 21 23:39:42.818: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:42.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9873" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.568 seconds]
[sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  GlusterFS [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129

    Only supported for node OS distro [gci ubuntu custom] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:43.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Mar 21 23:39:45.007: INFO: Waiting up to 5m0s for pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4" in namespace "downward-api-5759" to be "Succeeded or Failed"
Mar 21 23:39:45.147: INFO: Pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 140.834517ms
Mar 21 23:39:47.424: INFO: Pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417392209s
Mar 21 23:39:50.430: INFO: Pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.423761948s
Mar 21 23:39:53.009: INFO: Pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.002786949s
Mar 21 23:39:55.123: INFO: Pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116811096s
Elapsed: 10.116811096s STEP: Saw pod success Mar 21 23:39:55.123: INFO: Pod "metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4" satisfied condition "Succeeded or Failed" Mar 21 23:39:55.126: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4 container client-container: STEP: delete the pod Mar 21 23:39:55.782: INFO: Waiting for pod metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4 to disappear Mar 21 23:39:55.863: INFO: Pod metadata-volume-1c32df0b-0de9-43e1-82b3-56c4601c34c4 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:39:55.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5759" for this suite. • [SLOW TEST:13.051 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":8,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes ConfigMap should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:39:56.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
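The content checks that follow exec into the client pod and cat the projected files. A minimal standalone sketch of the same mount-and-read flow, with hypothetical resource names (the e2e fixture wires this up programmatically), using the first file's literal content from the log:

    kubectl create configmap configmap-demo --from-literal=firstfile='this is the first file'
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-client-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-client
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cm
          mountPath: /opt/0
    # the ConfigMap is projected as one file per key under the mount path
      volumes:
      - name: cm
        configMap:
          name: configmap-demo
    EOF
    kubectl exec configmap-client-demo -- cat /opt/0/firstfile   # expect: this is the first file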
Mar 21 23:40:05.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=volume-8574 exec configmap-client --namespace=volume-8574 -- cat /opt/0/firstfile' Mar 21 23:40:20.007: INFO: stderr: "" Mar 21 23:40:20.007: INFO: stdout: "this is the first file" Mar 21 23:40:20.007: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-8574 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:40:20.007: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:40:20.112: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-8574 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:40:20.112: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:40:20.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=volume-8574 exec configmap-client --namespace=volume-8574 -- cat /opt/1/secondfile' Mar 21 23:40:20.600: INFO: stderr: "" Mar 21 23:40:20.601: INFO: stdout: "this is the second file" Mar 21 23:40:20.601: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-8574 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:40:20.601: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:40:20.745: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-8574 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:40:20.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-8574 Mar 21 23:40:20.875: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:21.040: INFO: Pod configmap-client still exists Mar 21 23:40:23.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:23.574: INFO: Pod configmap-client still exists Mar 21 23:40:25.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:25.083: INFO: Pod configmap-client still exists Mar 21 23:40:27.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:27.219: INFO: Pod configmap-client still exists Mar 21 23:40:29.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:29.066: INFO: Pod configmap-client still exists Mar 21 23:40:31.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:31.068: INFO: Pod configmap-client still exists Mar 21 23:40:33.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:33.046: INFO: Pod configmap-client still exists Mar 21 23:40:35.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:35.052: INFO: Pod configmap-client still exists Mar 21 23:40:37.042: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:37.174: INFO: Pod configmap-client still exists Mar 21 23:40:39.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:39.200: INFO: Pod configmap-client still exists Mar 21 23:40:41.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:41.072: INFO: Pod configmap-client still exists Mar 21 23:40:43.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:43.048: INFO: Pod configmap-client still exists Mar 21 23:40:45.040: INFO: 
Waiting for pod configmap-client to disappear Mar 21 23:40:45.074: INFO: Pod configmap-client still exists Mar 21 23:40:47.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:47.304: INFO: Pod configmap-client still exists Mar 21 23:40:49.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:49.119: INFO: Pod configmap-client still exists Mar 21 23:40:51.041: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:51.073: INFO: Pod configmap-client still exists Mar 21 23:40:53.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:53.352: INFO: Pod configmap-client still exists Mar 21 23:40:55.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:55.058: INFO: Pod configmap-client still exists Mar 21 23:40:57.040: INFO: Waiting for pod configmap-client to disappear Mar 21 23:40:57.065: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:40:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8574" for this suite. • [SLOW TEST:61.811 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":133,"completed":9,"skipped":473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:40:57.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 21 23:40:58.193: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:40:58.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4100" for this suite. 
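The long run of "Waiting for pod configmap-client to disappear" lines above is the framework's hand-rolled deletion poll. The same check can be expressed with kubectl's built-in wait, using the namespace taken from the log:

    kubectl delete pod configmap-client -n volume-8574 --wait=false
    kubectl wait --for=delete pod/configmap-client -n volume-8574 --timeout=5m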
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.327 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:40:58.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-5776 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 21 23:40:58.698: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-attacher Mar 21 23:40:58.772: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5776 Mar 21 23:40:58.772: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5776 Mar 21 23:40:58.794: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5776 Mar 21 23:40:58.814: INFO: creating *v1.Role: csi-mock-volumes-5776-8577/external-attacher-cfg-csi-mock-volumes-5776 Mar 21 23:40:58.857: INFO: creating *v1.RoleBinding: csi-mock-volumes-5776-8577/csi-attacher-role-cfg Mar 21 23:40:58.958: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-provisioner Mar 21 23:40:58.976: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5776 Mar 21 23:40:58.976: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5776 Mar 21 23:40:59.004: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5776 Mar 21 23:40:59.018: INFO: creating *v1.Role: csi-mock-volumes-5776-8577/external-provisioner-cfg-csi-mock-volumes-5776 Mar 21 23:40:59.041: INFO: creating *v1.RoleBinding: csi-mock-volumes-5776-8577/csi-provisioner-role-cfg Mar 21 23:40:59.094: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-resizer Mar 21 23:40:59.126: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5776 Mar 21 23:40:59.126: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5776 Mar 21 23:40:59.144: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5776 Mar 21 23:40:59.168: INFO: creating *v1.Role: csi-mock-volumes-5776-8577/external-resizer-cfg-csi-mock-volumes-5776 Mar 21 
23:40:59.186: INFO: creating *v1.RoleBinding: csi-mock-volumes-5776-8577/csi-resizer-role-cfg Mar 21 23:40:59.232: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-snapshotter Mar 21 23:40:59.246: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5776 Mar 21 23:40:59.246: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5776 Mar 21 23:40:59.273: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5776 Mar 21 23:40:59.281: INFO: creating *v1.Role: csi-mock-volumes-5776-8577/external-snapshotter-leaderelection-csi-mock-volumes-5776 Mar 21 23:40:59.287: INFO: creating *v1.RoleBinding: csi-mock-volumes-5776-8577/external-snapshotter-leaderelection Mar 21 23:40:59.331: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-mock Mar 21 23:40:59.389: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5776 Mar 21 23:40:59.418: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5776 Mar 21 23:40:59.456: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5776 Mar 21 23:40:59.624: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5776 Mar 21 23:41:00.096: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5776 Mar 21 23:41:00.313: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5776 Mar 21 23:41:00.348: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5776 Mar 21 23:41:00.390: INFO: creating *v1.StatefulSet: csi-mock-volumes-5776-8577/csi-mockplugin Mar 21 23:41:00.450: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5776 Mar 21 23:41:00.525: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5776" Mar 21 23:41:00.539: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5776 to register on node latest-worker2 STEP: Creating pod with fsGroup Mar 21 23:41:16.433: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 21 23:41:16.559: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-5485s] to have phase Bound Mar 21 23:41:16.658: INFO: PersistentVolumeClaim pvc-5485s found but phase is Pending instead of Bound. 
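The Pending-then-Bound sequence logged for the claim is a plain phase poll; an equivalent shell loop, against a hypothetical claim name, is:

    # poll until the claim reports phase Bound (claim name hypothetical)
    until [ "$(kubectl get pvc pvc-demo -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done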
Mar 21 23:41:18.675: INFO: PersistentVolumeClaim pvc-5485s found and phase=Bound (2.115437671s) Mar 21 23:41:24.910: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-5776] Namespace:csi-mock-volumes-5776 PodName:pvc-volume-tester-j8xwq ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:41:24.910: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:41:25.103: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-5776/csi-mock-volumes-5776'; sync] Namespace:csi-mock-volumes-5776 PodName:pvc-volume-tester-j8xwq ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:41:25.103: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:42:28.135: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-5776/csi-mock-volumes-5776] Namespace:csi-mock-volumes-5776 PodName:pvc-volume-tester-j8xwq ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:42:28.135: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:42:28.305: INFO: pod csi-mock-volumes-5776/pvc-volume-tester-j8xwq exec for cmd ls -l /mnt/test/csi-mock-volumes-5776/csi-mock-volumes-5776, stdout: -rw-r--r-- 1 root 7823 13 Mar 21 23:41 /mnt/test/csi-mock-volumes-5776/csi-mock-volumes-5776, stderr: Mar 21 23:42:28.305: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-5776] Namespace:csi-mock-volumes-5776 PodName:pvc-volume-tester-j8xwq ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:42:28.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-j8xwq Mar 21 23:42:28.407: INFO: Deleting pod "pvc-volume-tester-j8xwq" in namespace "csi-mock-volumes-5776" Mar 21 23:42:28.463: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j8xwq" to be fully deleted STEP: Deleting claim pvc-5485s Mar 21 23:43:46.611: INFO: Waiting up to 2m0s for PersistentVolume pvc-58e512bf-d1db-4c0d-b8a7-4092edc238c0 to get deleted Mar 21 23:43:46.618: INFO: PersistentVolume pvc-58e512bf-d1db-4c0d-b8a7-4092edc238c0 found and phase=Bound (6.600597ms) Mar 21 23:43:48.656: INFO: PersistentVolume pvc-58e512bf-d1db-4c0d-b8a7-4092edc238c0 was removed STEP: Deleting storageclass csi-mock-volumes-5776-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5776 STEP: Waiting for namespaces [csi-mock-volumes-5776] to vanish STEP: uninstalling csi mock driver Mar 21 23:44:02.812: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-attacher Mar 21 23:44:02.920: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5776 Mar 21 23:44:03.019: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5776 Mar 21 23:44:03.073: INFO: deleting *v1.Role: csi-mock-volumes-5776-8577/external-attacher-cfg-csi-mock-volumes-5776 Mar 21 23:44:03.169: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5776-8577/csi-attacher-role-cfg Mar 21 23:44:03.215: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-provisioner Mar 21 23:44:03.332: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5776 Mar 21 23:44:03.370: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5776 Mar 21 23:44:03.393: INFO: deleting *v1.Role: 
csi-mock-volumes-5776-8577/external-provisioner-cfg-csi-mock-volumes-5776 Mar 21 23:44:03.424: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5776-8577/csi-provisioner-role-cfg Mar 21 23:44:03.534: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-resizer Mar 21 23:44:03.666: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5776 Mar 21 23:44:03.723: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5776 Mar 21 23:44:03.858: INFO: deleting *v1.Role: csi-mock-volumes-5776-8577/external-resizer-cfg-csi-mock-volumes-5776 Mar 21 23:44:04.035: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5776-8577/csi-resizer-role-cfg Mar 21 23:44:04.079: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-snapshotter Mar 21 23:44:04.144: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5776 Mar 21 23:44:04.217: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5776 Mar 21 23:44:04.276: INFO: deleting *v1.Role: csi-mock-volumes-5776-8577/external-snapshotter-leaderelection-csi-mock-volumes-5776 Mar 21 23:44:04.319: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5776-8577/external-snapshotter-leaderelection Mar 21 23:44:04.414: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5776-8577/csi-mock Mar 21 23:44:04.444: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5776 Mar 21 23:44:04.618: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5776 Mar 21 23:44:05.272: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5776 Mar 21 23:44:05.446: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5776 Mar 21 23:44:05.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5776 Mar 21 23:44:05.998: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5776 Mar 21 23:44:06.089: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5776 Mar 21 23:44:06.242: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5776-8577/csi-mockplugin Mar 21 23:44:06.367: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5776 STEP: deleting the driver namespace: csi-mock-volumes-5776-8577 STEP: Waiting for namespaces [csi-mock-volumes-5776-8577] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:44:20.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:202.344 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":133,"completed":10,"skipped":539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:44:20.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 21 23:44:20.788: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:44:20.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-103" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.274 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:44:20.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:44:32.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9492" for this suite. 
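The quota lifecycle exercised above (create quota, create claim, watch usage rise, delete claim, watch usage release) can be reproduced directly; all names below are hypothetical:

    kubectl create quota pvc-quota --hard=persistentvolumeclaims=1,requests.storage=1Gi
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: quota-demo-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    EOF
    kubectl get resourcequota pvc-quota -o jsonpath='{.status.used}'   # claim now counted against the quota
    kubectl delete pvc quota-demo-pvc                                  # usage drops back after deletion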
• [SLOW TEST:11.887 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":133,"completed":11,"skipped":571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:44:32.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 21 23:44:37.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13 && mount --bind /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13 /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13] Namespace:persistent-local-volumes-test-1038 PodName:hostexec-latest-worker-qstnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:44:37.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 21 23:44:37.343: INFO: Creating a PV followed by a PVC Mar 21 23:44:37.469: INFO: Waiting for PV local-pvpvrr2 to bind to PVC pvc-bvv25 Mar 21 23:44:37.469: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-bvv25] to have phase Bound Mar 21 23:44:37.543: INFO: PersistentVolumeClaim pvc-bvv25 found but phase is Pending instead of Bound. Mar 21 23:44:39.556: INFO: PersistentVolumeClaim pvc-bvv25 found but phase is Pending instead of Bound. 
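The dir-bindmounted volume type initialized above is a directory bind-mounted over itself on the node, then published as a local PersistentVolume pinned to that node with node affinity. A hand-built sketch of the same shape (paths and PV name hypothetical; the node name is the one from the log):

    # on the node, mirroring the nsenter command above
    mkdir /mnt/disks/demo && mount --bind /mnt/disks/demo /mnt/disks/demo
    # then, from the client side
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-demo
    spec:
      capacity:
        storage: 2Gi
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/demo
      # local PVs must declare which node holds the backing directory
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["latest-worker"]
    EOF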
Mar 21 23:44:41.575: INFO: PersistentVolumeClaim pvc-bvv25 found and phase=Bound (4.105743029s) Mar 21 23:44:41.575: INFO: Waiting up to 3m0s for PersistentVolume local-pvpvrr2 to have phase Bound Mar 21 23:44:41.617: INFO: PersistentVolume local-pvpvrr2 found and phase=Bound (42.454908ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 21 23:44:47.813: INFO: pod "pod-4af5d612-da62-4618-b6b9-92f5c8e89cbd" created on Node "latest-worker" STEP: Writing in pod1 Mar 21 23:44:47.813: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1038 PodName:pod-4af5d612-da62-4618-b6b9-92f5c8e89cbd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:47.814: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:47.946: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 21 23:44:47.946: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1038 PodName:pod-4af5d612-da62-4618-b6b9-92f5c8e89cbd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:47.946: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:48.045: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 21 23:44:54.467: INFO: pod "pod-877e7d52-91ad-4143-9b03-5838b7cc6ca7" created on Node "latest-worker" Mar 21 23:44:54.467: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1038 PodName:pod-877e7d52-91ad-4143-9b03-5838b7cc6ca7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:54.467: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:54.850: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 21 23:44:54.850: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1038 PodName:pod-877e7d52-91ad-4143-9b03-5838b7cc6ca7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:54.850: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:54.999: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 21 23:44:54.999: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1038 PodName:pod-4af5d612-da62-4618-b6b9-92f5c8e89cbd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:54.999: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:55.173: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-4af5d612-da62-4618-b6b9-92f5c8e89cbd in namespace persistent-local-volumes-test-1038 STEP: Deleting pod2 STEP: Deleting pod pod-877e7d52-91ad-4143-9b03-5838b7cc6ca7 in namespace persistent-local-volumes-test-1038 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 21 23:44:55.338: INFO: Deleting PersistentVolumeClaim "pvc-bvv25" Mar 21 23:44:55.367: INFO: Deleting PersistentVolume "local-pvpvrr2" STEP: Removing the test directory Mar 21 23:44:55.396: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13 && rm -r /tmp/local-volume-test-fefb6066-c3e5-4716-9b36-460b973fcc13] Namespace:persistent-local-volumes-test-1038 PodName:hostexec-latest-worker-qstnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:44:55.396: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:44:55.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1038" for this suite. • [SLOW TEST:22.907 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":12,"skipped":668,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:44:55.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-9555 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 21 23:44:56.747: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-attacher Mar 21 23:44:57.062: INFO: creating 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9555 Mar 21 23:44:57.062: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9555 Mar 21 23:44:57.156: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9555 Mar 21 23:44:57.271: INFO: creating *v1.Role: csi-mock-volumes-9555-2349/external-attacher-cfg-csi-mock-volumes-9555 Mar 21 23:44:57.286: INFO: creating *v1.RoleBinding: csi-mock-volumes-9555-2349/csi-attacher-role-cfg Mar 21 23:44:57.542: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-provisioner Mar 21 23:44:57.883: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9555 Mar 21 23:44:57.883: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9555 Mar 21 23:44:58.127: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9555 Mar 21 23:44:58.169: INFO: creating *v1.Role: csi-mock-volumes-9555-2349/external-provisioner-cfg-csi-mock-volumes-9555 Mar 21 23:44:58.180: INFO: creating *v1.RoleBinding: csi-mock-volumes-9555-2349/csi-provisioner-role-cfg Mar 21 23:44:58.469: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-resizer Mar 21 23:44:58.534: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9555 Mar 21 23:44:58.534: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9555 Mar 21 23:44:58.654: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9555 Mar 21 23:44:58.823: INFO: creating *v1.Role: csi-mock-volumes-9555-2349/external-resizer-cfg-csi-mock-volumes-9555 Mar 21 23:44:58.888: INFO: creating *v1.RoleBinding: csi-mock-volumes-9555-2349/csi-resizer-role-cfg Mar 21 23:44:58.990: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-snapshotter Mar 21 23:44:59.254: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9555 Mar 21 23:44:59.254: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9555 Mar 21 23:44:59.331: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9555 Mar 21 23:44:59.554: INFO: creating *v1.Role: csi-mock-volumes-9555-2349/external-snapshotter-leaderelection-csi-mock-volumes-9555 Mar 21 23:44:59.903: INFO: creating *v1.RoleBinding: csi-mock-volumes-9555-2349/external-snapshotter-leaderelection Mar 21 23:44:59.911: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-mock Mar 21 23:44:59.951: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9555 Mar 21 23:44:59.987: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9555 Mar 21 23:45:00.043: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9555 Mar 21 23:45:00.048: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9555 Mar 21 23:45:00.055: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9555 Mar 21 23:45:00.083: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9555 Mar 21 23:45:00.106: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9555 Mar 21 23:45:00.121: INFO: creating *v1.StatefulSet: csi-mock-volumes-9555-2349/csi-mockplugin Mar 21 23:45:00.133: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9555 Mar 21 23:45:00.186: INFO: creating *v1.StatefulSet: csi-mock-volumes-9555-2349/csi-mockplugin-attacher Mar 21 23:45:00.251: INFO: waiting up 
to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9555" Mar 21 23:45:00.353: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9555 to register on node latest-worker STEP: Creating pod Mar 21 23:45:17.912: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 21 23:45:18.010: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-nqchb] to have phase Bound Mar 21 23:45:18.043: INFO: PersistentVolumeClaim pvc-nqchb found but phase is Pending instead of Bound. Mar 21 23:45:20.530: INFO: PersistentVolumeClaim pvc-nqchb found and phase=Bound (2.51932656s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-kv8gj Mar 21 23:45:45.193: INFO: Deleting pod "pvc-volume-tester-kv8gj" in namespace "csi-mock-volumes-9555" Mar 21 23:45:45.285: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kv8gj" to be fully deleted STEP: Deleting claim pvc-nqchb Mar 21 23:46:08.004: INFO: Waiting up to 2m0s for PersistentVolume pvc-56fea359-9920-408f-ae25-9766ab62d9af to get deleted Mar 21 23:46:08.272: INFO: PersistentVolume pvc-56fea359-9920-408f-ae25-9766ab62d9af found and phase=Bound (267.716653ms) Mar 21 23:46:10.481: INFO: PersistentVolume pvc-56fea359-9920-408f-ae25-9766ab62d9af was removed STEP: Deleting storageclass csi-mock-volumes-9555-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9555 STEP: Waiting for namespaces [csi-mock-volumes-9555] to vanish STEP: uninstalling csi mock driver Mar 21 23:46:29.437: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-attacher Mar 21 23:46:29.572: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9555 Mar 21 23:46:29.615: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9555 Mar 21 23:46:29.657: INFO: deleting *v1.Role: csi-mock-volumes-9555-2349/external-attacher-cfg-csi-mock-volumes-9555 Mar 21 23:46:29.736: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9555-2349/csi-attacher-role-cfg Mar 21 23:46:29.760: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-provisioner Mar 21 23:46:29.795: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9555 Mar 21 23:46:29.898: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9555 Mar 21 23:46:29.962: INFO: deleting *v1.Role: csi-mock-volumes-9555-2349/external-provisioner-cfg-csi-mock-volumes-9555 Mar 21 23:46:29.987: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9555-2349/csi-provisioner-role-cfg Mar 21 23:46:30.047: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-resizer Mar 21 23:46:30.113: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9555 Mar 21 23:46:30.186: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9555 Mar 21 23:46:30.217: INFO: deleting *v1.Role: csi-mock-volumes-9555-2349/external-resizer-cfg-csi-mock-volumes-9555 Mar 21 23:46:30.304: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9555-2349/csi-resizer-role-cfg Mar 21 23:46:30.365: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-snapshotter Mar 21 23:46:30.476: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9555 Mar 21 23:46:30.508: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9555 Mar 21 23:46:30.609: INFO: deleting *v1.Role: csi-mock-volumes-9555-2349/external-snapshotter-leaderelection-csi-mock-volumes-9555 Mar 21 23:46:30.653: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-9555-2349/external-snapshotter-leaderelection Mar 21 23:46:30.695: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9555-2349/csi-mock Mar 21 23:46:30.773: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9555 Mar 21 23:46:30.871: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9555 Mar 21 23:46:30.882: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9555 Mar 21 23:46:30.931: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9555 Mar 21 23:46:30.992: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9555 Mar 21 23:46:31.057: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9555 Mar 21 23:46:31.154: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9555 Mar 21 23:46:31.218: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9555-2349/csi-mockplugin Mar 21 23:46:31.248: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9555 Mar 21 23:46:31.300: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9555-2349/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9555-2349 STEP: Waiting for namespaces [csi-mock-volumes-9555-2349] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:47:11.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:136.049 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":133,"completed":13,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:47:11.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 21 23:47:12.198: INFO: Waiting up to 5m0s for 
pod "pod-48622f8f-ab94-4005-9816-0d6e24173e92" in namespace "emptydir-5813" to be "Succeeded or Failed" Mar 21 23:47:12.274: INFO: Pod "pod-48622f8f-ab94-4005-9816-0d6e24173e92": Phase="Pending", Reason="", readiness=false. Elapsed: 76.043002ms Mar 21 23:47:14.723: INFO: Pod "pod-48622f8f-ab94-4005-9816-0d6e24173e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524587407s Mar 21 23:47:16.875: INFO: Pod "pod-48622f8f-ab94-4005-9816-0d6e24173e92": Phase="Running", Reason="", readiness=true. Elapsed: 4.676846663s Mar 21 23:47:18.923: INFO: Pod "pod-48622f8f-ab94-4005-9816-0d6e24173e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.724875186s STEP: Saw pod success Mar 21 23:47:18.923: INFO: Pod "pod-48622f8f-ab94-4005-9816-0d6e24173e92" satisfied condition "Succeeded or Failed" Mar 21 23:47:19.047: INFO: Trying to get logs from node latest-worker pod pod-48622f8f-ab94-4005-9816-0d6e24173e92 container test-container: STEP: delete the pod Mar 21 23:47:19.510: INFO: Waiting for pod pod-48622f8f-ab94-4005-9816-0d6e24173e92 to disappear Mar 21 23:47:19.515: INFO: Pod pod-48622f8f-ab94-4005-9816-0d6e24173e92 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:47:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5813" for this suite. • [SLOW TEST:8.287 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":133,"completed":14,"skipped":749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:47:20.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-4448 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 21 23:47:21.983: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-attacher Mar 21 23:47:22.161: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4448 Mar 21 23:47:22.161: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-4448 Mar 21 23:47:22.533: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4448 Mar 21 23:47:22.966: INFO: creating *v1.Role: csi-mock-volumes-4448-6479/external-attacher-cfg-csi-mock-volumes-4448 Mar 21 23:47:23.189: INFO: creating *v1.RoleBinding: csi-mock-volumes-4448-6479/csi-attacher-role-cfg Mar 21 23:47:23.394: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-provisioner Mar 21 23:47:23.411: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4448 Mar 21 23:47:23.411: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4448 Mar 21 23:47:23.481: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4448 Mar 21 23:47:23.580: INFO: creating *v1.Role: csi-mock-volumes-4448-6479/external-provisioner-cfg-csi-mock-volumes-4448 Mar 21 23:47:23.615: INFO: creating *v1.RoleBinding: csi-mock-volumes-4448-6479/csi-provisioner-role-cfg Mar 21 23:47:23.687: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-resizer Mar 21 23:47:23.711: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4448 Mar 21 23:47:23.711: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4448 Mar 21 23:47:23.743: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4448 Mar 21 23:47:23.756: INFO: creating *v1.Role: csi-mock-volumes-4448-6479/external-resizer-cfg-csi-mock-volumes-4448 Mar 21 23:47:23.777: INFO: creating *v1.RoleBinding: csi-mock-volumes-4448-6479/csi-resizer-role-cfg Mar 21 23:47:23.818: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-snapshotter Mar 21 23:47:23.825: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4448 Mar 21 23:47:23.825: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4448 Mar 21 23:47:23.841: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4448 Mar 21 23:47:23.846: INFO: creating *v1.Role: csi-mock-volumes-4448-6479/external-snapshotter-leaderelection-csi-mock-volumes-4448 Mar 21 23:47:23.870: INFO: creating *v1.RoleBinding: csi-mock-volumes-4448-6479/external-snapshotter-leaderelection Mar 21 23:47:23.915: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-mock Mar 21 23:47:23.962: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4448 Mar 21 23:47:23.978: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4448 Mar 21 23:47:23.984: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4448 Mar 21 23:47:24.006: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4448 Mar 21 23:47:24.029: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4448 Mar 21 23:47:24.037: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4448 Mar 21 23:47:24.043: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4448 Mar 21 23:47:24.060: INFO: creating *v1.StatefulSet: csi-mock-volumes-4448-6479/csi-mockplugin Mar 21 23:47:24.088: INFO: creating *v1.StatefulSet: csi-mock-volumes-4448-6479/csi-mockplugin-attacher Mar 21 23:47:24.113: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4448 to register on node latest-worker2 STEP: Creating pod Mar 21 23:47:34.483: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, 
treating as nil Mar 21 23:47:34.544: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-hdpbx] to have phase Bound Mar 21 23:47:34.579: INFO: PersistentVolumeClaim pvc-hdpbx found but phase is Pending instead of Bound. Mar 21 23:47:36.627: INFO: PersistentVolumeClaim pvc-hdpbx found and phase=Bound (2.082710182s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-wx8v7 Mar 21 23:49:50.341: INFO: Deleting pod "pvc-volume-tester-wx8v7" in namespace "csi-mock-volumes-4448" Mar 21 23:49:50.749: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wx8v7" to be fully deleted STEP: Deleting claim pvc-hdpbx Mar 21 23:50:57.015: INFO: Waiting up to 2m0s for PersistentVolume pvc-1ab5c5c7-eb6f-4106-8618-956c4b123618 to get deleted Mar 21 23:50:57.021: INFO: PersistentVolume pvc-1ab5c5c7-eb6f-4106-8618-956c4b123618 found and phase=Bound (5.956921ms) Mar 21 23:50:59.219: INFO: PersistentVolume pvc-1ab5c5c7-eb6f-4106-8618-956c4b123618 was removed STEP: Deleting storageclass csi-mock-volumes-4448-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4448 STEP: Waiting for namespaces [csi-mock-volumes-4448] to vanish STEP: uninstalling csi mock driver Mar 21 23:51:11.645: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-attacher Mar 21 23:51:11.700: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4448 Mar 21 23:51:11.770: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4448 Mar 21 23:51:11.840: INFO: deleting *v1.Role: csi-mock-volumes-4448-6479/external-attacher-cfg-csi-mock-volumes-4448 Mar 21 23:51:11.911: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4448-6479/csi-attacher-role-cfg Mar 21 23:51:12.065: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-provisioner Mar 21 23:51:12.135: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4448 Mar 21 23:51:12.212: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4448 Mar 21 23:51:12.283: INFO: deleting *v1.Role: csi-mock-volumes-4448-6479/external-provisioner-cfg-csi-mock-volumes-4448 Mar 21 23:51:12.336: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4448-6479/csi-provisioner-role-cfg Mar 21 23:51:12.368: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-resizer Mar 21 23:51:12.505: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4448 Mar 21 23:51:12.517: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4448 Mar 21 23:51:12.566: INFO: deleting *v1.Role: csi-mock-volumes-4448-6479/external-resizer-cfg-csi-mock-volumes-4448 Mar 21 23:51:12.797: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4448-6479/csi-resizer-role-cfg Mar 21 23:51:12.837: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-snapshotter Mar 21 23:51:13.041: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4448 Mar 21 23:51:13.423: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4448 Mar 21 23:51:13.564: INFO: deleting *v1.Role: csi-mock-volumes-4448-6479/external-snapshotter-leaderelection-csi-mock-volumes-4448 Mar 21 23:51:13.852: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4448-6479/external-snapshotter-leaderelection Mar 21 23:51:14.014: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4448-6479/csi-mock Mar 21 23:51:14.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4448 Mar 21 23:51:14.315: INFO: deleting 
*v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4448 Mar 21 23:51:14.441: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4448 Mar 21 23:51:14.548: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4448 Mar 21 23:51:14.638: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4448 Mar 21 23:51:15.031: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4448 Mar 21 23:51:15.105: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4448 Mar 21 23:51:15.190: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4448-6479/csi-mockplugin Mar 21 23:51:15.236: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4448-6479/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4448-6479 STEP: Waiting for namespaces [csi-mock-volumes-4448-6479] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:51:59.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:279.723 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":133,"completed":15,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:51:59.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim with storage class STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:52:11.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2155" for this suite. 
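Unlike the earlier quota spec, this one scopes the quota to a StorageClass; the per-class quota resources follow the <class>.storageclass.storage.k8s.io/ prefix. A sketch with a hypothetical class name "gold":

    kubectl create quota sc-quota \
      --hard=gold.storageclass.storage.k8s.io/persistentvolumeclaims=1,gold.storageclass.storage.k8s.io/requests.storage=1Gi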
• [SLOW TEST:11.984 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":133,"completed":16,"skipped":803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:52:11.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 21 23:52:12.006: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:52:12.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2653" for this suite. 
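[Editor's note] This skip, like the GCEPD one earlier in the run, is the framework's provider gate: the suite was launched with the local provider, so specs restricted to gce/gke/aws exit in BeforeEach instead of failing. Running them means pointing the e2e binary at a matching cloud; a hedged sketch, with placeholder project and zone values:

```sh
# Re-run provider-gated storage specs against GCE rather than a local cluster.
# Project and zone are placeholders; flag names follow the upstream e2e framework.
./e2e.test --provider=gce \
  --gce-project=my-project \
  --gce-zone=us-central1-b \
  --kubeconfig="$HOME/.kube/config" \
  --ginkgo.focus='\[sig-storage\] \[Serial\] Volume metrics'
```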
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.403 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:52:12.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177" Mar 21 23:52:16.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177 && dd if=/dev/zero of=/tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177/file] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:16.517: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:52:16.747: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:16.747: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:52:16.930: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177 && chmod o+rwx 
/tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:16.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 21 23:52:17.365: INFO: Creating a PV followed by a PVC Mar 21 23:52:17.417: INFO: Waiting for PV local-pvfw7kq to bind to PVC pvc-fb4sv Mar 21 23:52:17.417: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fb4sv] to have phase Bound Mar 21 23:52:17.449: INFO: PersistentVolumeClaim pvc-fb4sv found but phase is Pending instead of Bound. Mar 21 23:52:19.456: INFO: PersistentVolumeClaim pvc-fb4sv found and phase=Bound (2.038335839s) Mar 21 23:52:19.456: INFO: Waiting up to 3m0s for PersistentVolume local-pvfw7kq to have phase Bound Mar 21 23:52:19.555: INFO: PersistentVolume local-pvfw7kq found and phase=Bound (99.734248ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 21 23:52:25.751: INFO: pod "pod-fbcedd42-ef07-44fb-acb7-3d03ce8323d9" created on Node "latest-worker2" STEP: Writing in pod1 Mar 21 23:52:25.751: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-733 PodName:pod-fbcedd42-ef07-44fb-acb7-3d03ce8323d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:52:25.751: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:52:25.894: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 21 23:52:25.894: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-733 PodName:pod-fbcedd42-ef07-44fb-acb7-3d03ce8323d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:52:25.894: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:52:26.016: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 21 23:52:26.016: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-733 PodName:pod-fbcedd42-ef07-44fb-acb7-3d03ce8323d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:52:26.016: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:52:26.128: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-fbcedd42-ef07-44fb-acb7-3d03ce8323d9 in namespace persistent-local-volumes-test-733 [AfterEach] [Volume type: blockfswithformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 21 23:52:26.200: INFO: Deleting PersistentVolumeClaim "pvc-fb4sv" Mar 21 23:52:26.290: INFO: Deleting PersistentVolume "local-pvfw7kq" Mar 21 23:52:26.451: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:26.452: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:52:26.939: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:26.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177/file Mar 21 23:52:27.177: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:27.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177 Mar 21 23:52:27.275: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e81a34d3-990f-47ad-9aca-ffd9a3b90177] Namespace:persistent-local-volumes-test-733 PodName:hostexec-latest-worker2-mmmnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:52:27.275: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:52:27.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-733" for this suite. 
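[Editor's note] The [Volume type: blockfswithformat] fixture is assembled entirely from the shell commands visible in the ExecWithOptions entries above: a file-backed loop device, an ext4 filesystem, and a world-writable mount, torn down in reverse order. Collected into a standalone script (run as root on the node; the directory is a placeholder):

```sh
# Set up the harness's "blockfswithformat" local volume by hand.
DIR=/tmp/local-volume-test            # placeholder path
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120    # ~20 MiB backing file
losetup -f "$DIR/file"                               # attach first free loop device
LOOPDEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
mkfs -t ext4 "$LOOPDEV"
mount -t ext4 "$LOOPDEV" "$DIR"
chmod o+rwx "$DIR"

# Teardown, mirroring the AfterEach: unmount, detach the loop device, remove.
umount "$DIR"
losetup -d "$LOOPDEV"
rm -r "$DIR"
```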
• [SLOW TEST:15.555 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":17,"skipped":847,"failed":0} SSS ------------------------------ [sig-storage] Volume limits should verify that all nodes have volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:52:27.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Mar 21 23:52:27.836: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:52:27.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-2228" for this suite. 
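[Editor's note] Volume limits are only meaningful where a cloud plugin or CSI driver reports an attach ceiling, hence the provider restriction in the skip that follows. On clusters where limits do exist, they are readable straight from the API; a sketch, assuming an AWS cluster for the in-tree resource name:

```sh
# In-tree plugins expose attach limits as node allocatable resources
# (the name varies by provider; attachable-volumes-aws-ebs is the AWS one).
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.attachable-volumes-aws-ebs}{"\n"}{end}'

# CSI drivers publish their per-node limits in CSINode objects instead.
kubectl get csinode -o yaml
```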
S [SKIPPING] in Spec Setup (BeforeEach) [0.316 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ [sig-storage] HostPath should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:52:27.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Mar 21 23:52:28.175: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8652" to be "Succeeded or Failed" Mar 21 23:52:28.293: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 117.731672ms Mar 21 23:52:30.367: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191949764s Mar 21 23:52:32.445: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270162993s Mar 21 23:52:34.506: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.330910895s STEP: Saw pod success Mar 21 23:52:34.506: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 21 23:52:34.536: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-2: STEP: delete the pod Mar 21 23:52:34.790: INFO: Waiting for pod pod-host-path-test to disappear Mar 21 23:52:34.828: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:52:34.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8652" for this suite. 
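[Editor's note] The subPath mechanism exercised above mounts a single subdirectory of a volume into the container rather than the volume root. A minimal hand-rolled illustration, not the test's actual pod spec (names, image, and path are placeholders):

```sh
# The container sees only the "sub-dir" subdirectory of the hostPath volume;
# kubelet creates the subdirectory under the volume root if it is missing.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "echo hello > /sub/file && cat /sub/file"]
    volumeMounts:
    - name: vol
      mountPath: /sub
      subPath: sub-dir
EOF
kubectl logs hostpath-subpath-demo    # prints "hello" once the pod has run
```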
• [SLOW TEST:7.175 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":133,"completed":18,"skipped":851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:52:35.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-739 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 21 23:52:35.509: INFO: creating *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-attacher Mar 21 23:52:35.535: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-739 Mar 21 23:52:35.535: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-739 Mar 21 23:52:35.571: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-739 Mar 21 23:52:35.586: INFO: creating *v1.Role: csi-mock-volumes-739-8431/external-attacher-cfg-csi-mock-volumes-739 Mar 21 23:52:35.604: INFO: creating *v1.RoleBinding: csi-mock-volumes-739-8431/csi-attacher-role-cfg Mar 21 23:52:35.616: INFO: creating *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-provisioner Mar 21 23:52:35.667: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-739 Mar 21 23:52:35.667: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-739 Mar 21 23:52:35.674: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-739 Mar 21 23:52:35.680: INFO: creating *v1.Role: csi-mock-volumes-739-8431/external-provisioner-cfg-csi-mock-volumes-739 Mar 21 23:52:35.700: INFO: creating *v1.RoleBinding: csi-mock-volumes-739-8431/csi-provisioner-role-cfg Mar 21 23:52:35.732: INFO: creating *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-resizer Mar 21 23:52:35.751: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-739 Mar 21 23:52:35.751: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-739 Mar 21 23:52:35.793: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-739 Mar 21 23:52:35.799: INFO: creating *v1.Role: csi-mock-volumes-739-8431/external-resizer-cfg-csi-mock-volumes-739 Mar 21 23:52:35.805: INFO: creating *v1.RoleBinding: csi-mock-volumes-739-8431/csi-resizer-role-cfg Mar 21 23:52:35.827: INFO: creating *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-snapshotter Mar 21 23:52:35.873: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-739 Mar 21 
23:52:35.873: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-739 Mar 21 23:52:35.927: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-739 Mar 21 23:52:35.943: INFO: creating *v1.Role: csi-mock-volumes-739-8431/external-snapshotter-leaderelection-csi-mock-volumes-739 Mar 21 23:52:35.978: INFO: creating *v1.RoleBinding: csi-mock-volumes-739-8431/external-snapshotter-leaderelection Mar 21 23:52:36.066: INFO: creating *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-mock Mar 21 23:52:36.081: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-739 Mar 21 23:52:36.101: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-739 Mar 21 23:52:36.112: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-739 Mar 21 23:52:36.118: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-739 Mar 21 23:52:36.124: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-739 Mar 21 23:52:36.143: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-739 Mar 21 23:52:36.154: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-739 Mar 21 23:52:36.206: INFO: creating *v1.StatefulSet: csi-mock-volumes-739-8431/csi-mockplugin Mar 21 23:52:36.230: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-739 Mar 21 23:52:36.260: INFO: creating *v1.StatefulSet: csi-mock-volumes-739-8431/csi-mockplugin-attacher Mar 21 23:52:36.338: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-739" Mar 21 23:52:36.358: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-739 to register on node latest-worker2 STEP: Creating pod Mar 21 23:52:51.404: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 21 23:53:13.644: FAIL: pod unexpectedly started to run Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232 +0xad9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: Deleting pod pvc-volume-tester-4bp25 Mar 21 23:53:13.645: INFO: Deleting pod "pvc-volume-tester-4bp25" in namespace "csi-mock-volumes-739" Mar 21 23:53:13.741: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4bp25" to be fully deleted STEP: Deleting claim pvc-whlwg Mar 21 23:53:56.014: INFO: Waiting up to 2m0s for PersistentVolume pvc-bf7249f8-061a-4a74-8ccf-093337f7e6b3 to get deleted Mar 21 23:53:56.099: INFO: PersistentVolume pvc-bf7249f8-061a-4a74-8ccf-093337f7e6b3 found and phase=Bound (85.165379ms) Mar 21 23:53:58.305: INFO: PersistentVolume pvc-bf7249f8-061a-4a74-8ccf-093337f7e6b3 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-739 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-739 STEP: Waiting for namespaces [csi-mock-volumes-739] to vanish STEP: uninstalling csi mock driver Mar 21 23:54:12.792: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-attacher 
Mar 21 23:54:12.875: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-739 Mar 21 23:54:13.010: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-739 Mar 21 23:54:13.040: INFO: deleting *v1.Role: csi-mock-volumes-739-8431/external-attacher-cfg-csi-mock-volumes-739 Mar 21 23:54:13.055: INFO: deleting *v1.RoleBinding: csi-mock-volumes-739-8431/csi-attacher-role-cfg Mar 21 23:54:13.110: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-provisioner Mar 21 23:54:13.167: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-739 Mar 21 23:54:13.221: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-739 Mar 21 23:54:13.239: INFO: deleting *v1.Role: csi-mock-volumes-739-8431/external-provisioner-cfg-csi-mock-volumes-739 Mar 21 23:54:13.305: INFO: deleting *v1.RoleBinding: csi-mock-volumes-739-8431/csi-provisioner-role-cfg Mar 21 23:54:13.326: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-resizer Mar 21 23:54:13.450: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-739 Mar 21 23:54:13.493: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-739 Mar 21 23:54:13.585: INFO: deleting *v1.Role: csi-mock-volumes-739-8431/external-resizer-cfg-csi-mock-volumes-739 Mar 21 23:54:13.616: INFO: deleting *v1.RoleBinding: csi-mock-volumes-739-8431/csi-resizer-role-cfg Mar 21 23:54:13.716: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-snapshotter Mar 21 23:54:13.756: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-739 Mar 21 23:54:13.861: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-739 Mar 21 23:54:13.893: INFO: deleting *v1.Role: csi-mock-volumes-739-8431/external-snapshotter-leaderelection-csi-mock-volumes-739 Mar 21 23:54:14.136: INFO: deleting *v1.RoleBinding: csi-mock-volumes-739-8431/external-snapshotter-leaderelection Mar 21 23:54:14.234: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-739-8431/csi-mock Mar 21 23:54:14.414: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-739 Mar 21 23:54:14.466: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-739 Mar 21 23:54:14.490: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-739 Mar 21 23:54:14.528: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-739 Mar 21 23:54:14.618: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-739 Mar 21 23:54:14.684: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-739 Mar 21 23:54:14.707: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-739 Mar 21 23:54:14.762: INFO: deleting *v1.StatefulSet: csi-mock-volumes-739-8431/csi-mockplugin Mar 21 23:54:14.786: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-739 Mar 21 23:54:14.902: INFO: deleting *v1.StatefulSet: csi-mock-volumes-739-8431/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-739-8431 STEP: Waiting for namespaces [csi-mock-volumes-739-8431] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:55:01.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [145.975 seconds] [sig-storage] CSI mock 
volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, no capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 21 23:53:13.644: pod unexpectedly started to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":133,"completed":18,"skipped":910,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:55:01.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-92kf STEP: Failing liveness probe Mar 21 23:55:09.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=subpath-5347 exec pod-subpath-test-configmap-92kf --container test-container-volume-configmap-92kf -- /bin/sh -c rm /probe-volume/probe-file' Mar 21 23:55:14.648: INFO: stderr: "" Mar 21 23:55:14.648: INFO: stdout: "" Mar 21 23:55:14.648: INFO: Pod exec output: STEP: Waiting for container to restart Mar 21 23:55:14.717: INFO: Container test-container-subpath-configmap-92kf, restarts: 0 Mar 21 23:55:24.721: INFO: Container test-container-subpath-configmap-92kf, restarts: 1 Mar 21 23:55:24.721: INFO: Container has restart count: 1 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Mar 21 23:55:32.864: INFO: Container has restart count: 2 Mar 21 23:55:52.853: INFO: Container has restart count: 3 Mar 21 23:56:54.858: INFO: Container restart has stabilized Mar 21 23:56:54.858: INFO: Deleting pod "pod-subpath-test-configmap-92kf" in namespace "subpath-5347" Mar 21 23:56:54.969: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-92kf" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:57:17.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5347" for this suite. 
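[Editor's note] Two things are worth pulling out of this block. The failure above it asserts the opposite of what happened: with CSIStorageCapacity checks enabled and no capacity objects published for the mock driver, the scheduler was expected to leave the pod pending, so "pod unexpectedly started to run" means it was scheduled anyway. The restart test, meanwhile, drives kubelet's liveness machinery by hand: it deletes the file a liveness probe reads, lets restarts accumulate, then restores it. The exec command below is taken from the log; the watch is a hedged addition:

```sh
# Force restarts: delete the file the liveness probe reads (command from the log).
kubectl --namespace=subpath-5347 exec pod-subpath-test-configmap-92kf \
  --container test-container-volume-configmap-92kf -- /bin/sh -c 'rm /probe-volume/probe-file'

# Watch the restart counter climb until the probe target is restored.
kubectl --namespace=subpath-5347 get pod pod-subpath-test-configmap-92kf -w \
  -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount
```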
• [SLOW TEST:137.090 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":133,"completed":19,"skipped":936,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Flexvolumes should be mountable when non-attachable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:57:18.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Mar 21 23:57:19.201: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:57:19.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-9140" for this suite. 
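[Editor's note] Flexvolume specs install a driver on the node over SSH, so they skip when no key is available; the path in the message, /root/.ssh/id_rsa, is where the local provider looks. A hedged sketch of satisfying that precondition (distributing the key to the nodes is cluster-specific and omitted):

```sh
# Create the key the skip message says is missing.
ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
# The public half (/root/.ssh/id_rsa.pub) must then be appended to
# authorized_keys on each node before the SSH-based specs can proceed.
```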
S [SKIPPING] in Spec Setup (BeforeEach) [1.152 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:57:19.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442" Mar 21 23:57:28.340: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442 && dd if=/dev/zero of=/tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442/file] Namespace:persistent-local-volumes-test-5874 PodName:hostexec-latest-worker2-x4xjn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:57:28.341: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:57:28.546: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5874 PodName:hostexec-latest-worker2-x4xjn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:57:28.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 21 23:57:28.698: INFO: Creating a PV followed by a PVC Mar 21 23:57:28.733: INFO: Waiting for PV local-pvvt5ql to bind to PVC pvc-9gf8d Mar 21 23:57:28.733: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-9gf8d] to have phase Bound Mar 21 23:57:28.798: INFO: PersistentVolumeClaim pvc-9gf8d found but phase is Pending instead of Bound. 
Mar 21 23:57:30.880: INFO: PersistentVolumeClaim pvc-9gf8d found and phase=Bound (2.146985998s) Mar 21 23:57:30.880: INFO: Waiting up to 3m0s for PersistentVolume local-pvvt5ql to have phase Bound Mar 21 23:57:30.883: INFO: PersistentVolume local-pvvt5ql found and phase=Bound (2.442487ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 21 23:57:37.580: INFO: pod "pod-439dd0a3-3f2d-48f7-a70f-fe0ee78c1bc2" created on Node "latest-worker2" STEP: Writing in pod1 Mar 21 23:57:37.580: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5874 PodName:pod-439dd0a3-3f2d-48f7-a70f-fe0ee78c1bc2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:57:37.580: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:57:37.776: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000066 seconds, 266.3KB/s", err: Mar 21 23:57:37.776: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-5874 PodName:pod-439dd0a3-3f2d-48f7-a70f-fe0ee78c1bc2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:57:37.776: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:57:37.898: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-439dd0a3-3f2d-48f7-a70f-fe0ee78c1bc2 in namespace persistent-local-volumes-test-5874 STEP: Creating pod2 STEP: Creating a pod Mar 21 23:57:42.595: INFO: pod "pod-9cc43754-eb2d-41c0-9241-a751ec8d394d" created on Node "latest-worker2" STEP: Reading in pod2 Mar 21 23:57:42.595: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-5874 PodName:pod-9cc43754-eb2d-41c0-9241-a751ec8d394d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:57:42.595: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:57:42.710: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-9cc43754-eb2d-41c0-9241-a751ec8d394d in namespace persistent-local-volumes-test-5874 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 21 23:57:42.763: INFO: Deleting PersistentVolumeClaim "pvc-9gf8d" Mar 21 23:57:42.845: INFO: Deleting 
PersistentVolume "local-pvvt5ql" Mar 21 23:57:42.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5874 PodName:hostexec-latest-worker2-x4xjn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:57:42.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442/file Mar 21 23:57:43.076: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5874 PodName:hostexec-latest-worker2-x4xjn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:57:43.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442 Mar 21 23:57:43.244: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ecd7a87b-f5b8-4ef3-9fe4-5b58c5500442] Namespace:persistent-local-volumes-test-5874 PodName:hostexec-latest-worker2-x4xjn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:57:43.244: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:57:43.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5874" for this suite. 
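[Editor's note] Note how the raw-block variant differs from the filesystem volume types earlier in the run: with volumeMode Block the pod sees /mnt/volume1 as a device node, not a directory, so the harness stages the payload in a regular file and copies it through dd, then reads it back as printable bytes. The pod-side commands, lifted from the log above:

```sh
# pod1: write a marker through the raw block device.
mkdir -p /tmp/mnt/volume1
echo test-file-content > /tmp/mnt/volume1/test-file
dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100

# pod2: read the first 100 bytes back as printable characters.
hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1
```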
• [SLOW TEST:24.157 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":20,"skipped":981,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Mounted volume expand Should verify mounted devices can be resized /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:117 [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:57:43.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:59 Mar 21 23:57:43.757: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:57:43.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-2510" for this suite. 
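[Editor's note] The mounted-volume resize spec is provider-gated, but the user-visible half of what it checks is plain API surface: when the claim's StorageClass sets allowVolumeExpansion, growing the volume is a patch to the request, as the CSI mock expansion tests elsewhere in this run also rely on. A hedged sketch with placeholder names and sizes:

```sh
# Request a larger size on an existing claim
# (its StorageClass must set allowVolumeExpansion: true).
kubectl patch pvc my-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Track progress: capacity updates when the resize completes, and
# conditions such as FileSystemResizePending appear while it is underway.
kubectl get pvc my-claim -o jsonpath='{.status.capacity.storage}{"\n"}'
kubectl describe pvc my-claim
```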
[AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:105 Mar 21 23:57:43.912: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.361 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:117 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:60 ------------------------------ SSSS ------------------------------ [sig-storage] HostPath should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:57:43.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Mar 21 23:57:44.162: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7089" to be "Succeeded or Failed" Mar 21 23:57:44.216: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 53.838275ms Mar 21 23:57:46.359: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196975464s Mar 21 23:57:48.737: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575757777s Mar 21 23:57:50.747: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585032009s STEP: Saw pod success Mar 21 23:57:50.747: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 21 23:57:50.778: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-2: STEP: delete the pod Mar 21 23:57:50.963: INFO: Waiting for pod pod-host-path-test to disappear Mar 21 23:57:51.006: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:57:51.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7089" for this suite. 
• [SLOW TEST:7.172 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":133,"completed":21,"skipped":1028,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:57:51.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Mar 21 23:57:51.374: INFO: Waiting up to 5m0s for pod "pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e" in namespace "emptydir-7934" to be "Succeeded or Failed" Mar 21 23:57:51.515: INFO: Pod "pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 140.994355ms Mar 21 23:57:53.739: INFO: Pod "pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365635178s Mar 21 23:57:55.870: INFO: Pod "pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495924907s Mar 21 23:57:58.023: INFO: Pod "pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.649662631s STEP: Saw pod success Mar 21 23:57:58.023: INFO: Pod "pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e" satisfied condition "Succeeded or Failed" Mar 21 23:57:58.240: INFO: Trying to get logs from node latest-worker pod pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e container test-container: STEP: delete the pod Mar 21 23:57:58.420: INFO: Waiting for pod pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e to disappear Mar 21 23:57:58.450: INFO: Pod pod-4d95fbd0-7f69-4a22-9fc3-e920cbbd8c0e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:57:58.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7934" for this suite. 
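[Editor's note] The FSGroup emptyDir variants assert that the volume's ownership and mode follow pod.spec.securityContext.fsGroup. A self-contained illustration under assumed placeholder values (gid 1234, busybox image), not the test's own pod:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 1234            # emptyDir below becomes group-owned by gid 1234
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "ls -ldn /data && id"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
EOF
kubectl logs emptydir-fsgroup-demo    # group id on /data should read 1234
```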
• [SLOW TEST:7.445 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":133,"completed":22,"skipped":1041,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:57:58.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-2898 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 21 23:57:58.993: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-attacher Mar 21 23:57:59.042: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2898 Mar 21 23:57:59.042: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2898 Mar 21 23:57:59.046: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2898 Mar 21 23:57:59.074: INFO: creating *v1.Role: csi-mock-volumes-2898-5235/external-attacher-cfg-csi-mock-volumes-2898 Mar 21 23:57:59.181: INFO: creating *v1.RoleBinding: csi-mock-volumes-2898-5235/csi-attacher-role-cfg Mar 21 23:57:59.189: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-provisioner Mar 21 23:57:59.229: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2898 Mar 21 23:57:59.229: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2898 Mar 21 23:57:59.260: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2898 Mar 21 23:57:59.377: INFO: creating *v1.Role: csi-mock-volumes-2898-5235/external-provisioner-cfg-csi-mock-volumes-2898 Mar 21 23:57:59.415: INFO: creating *v1.RoleBinding: csi-mock-volumes-2898-5235/csi-provisioner-role-cfg Mar 21 23:57:59.445: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-resizer Mar 21 23:57:59.470: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2898 Mar 21 23:57:59.470: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2898 Mar 21 23:57:59.504: INFO: creating 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2898 Mar 21 23:57:59.525: INFO: creating *v1.Role: csi-mock-volumes-2898-5235/external-resizer-cfg-csi-mock-volumes-2898 Mar 21 23:57:59.561: INFO: creating *v1.RoleBinding: csi-mock-volumes-2898-5235/csi-resizer-role-cfg Mar 21 23:57:59.572: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-snapshotter Mar 21 23:57:59.578: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2898 Mar 21 23:57:59.578: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2898 Mar 21 23:57:59.584: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2898 Mar 21 23:57:59.666: INFO: creating *v1.Role: csi-mock-volumes-2898-5235/external-snapshotter-leaderelection-csi-mock-volumes-2898 Mar 21 23:57:59.712: INFO: creating *v1.RoleBinding: csi-mock-volumes-2898-5235/external-snapshotter-leaderelection Mar 21 23:57:59.733: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-mock Mar 21 23:57:59.747: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2898 Mar 21 23:57:59.810: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2898 Mar 21 23:57:59.842: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2898 Mar 21 23:57:59.902: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2898 Mar 21 23:58:00.007: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2898 Mar 21 23:58:00.076: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2898 Mar 21 23:58:00.173: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2898 Mar 21 23:58:00.208: INFO: creating *v1.StatefulSet: csi-mock-volumes-2898-5235/csi-mockplugin Mar 21 23:58:00.282: INFO: creating *v1.StatefulSet: csi-mock-volumes-2898-5235/csi-mockplugin-attacher Mar 21 23:58:00.303: INFO: creating *v1.StatefulSet: csi-mock-volumes-2898-5235/csi-mockplugin-resizer Mar 21 23:58:00.352: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2898 to register on node latest-worker2 STEP: Creating pod Mar 21 23:58:10.616: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 21 23:58:10.722: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-j2tql] to have phase Bound Mar 21 23:58:10.810: INFO: PersistentVolumeClaim pvc-j2tql found but phase is Pending instead of Bound. 
Mar 21 23:58:12.863: INFO: PersistentVolumeClaim pvc-j2tql found and phase=Bound (2.141552502s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-db58h Mar 21 23:58:37.877: INFO: Deleting pod "pvc-volume-tester-db58h" in namespace "csi-mock-volumes-2898" Mar 21 23:58:38.349: INFO: Wait up to 5m0s for pod "pvc-volume-tester-db58h" to be fully deleted STEP: Deleting claim pvc-j2tql Mar 21 23:59:05.698: INFO: Waiting up to 2m0s for PersistentVolume pvc-92814604-823a-49e3-bb2b-277b2cb11d6e to get deleted Mar 21 23:59:05.799: INFO: PersistentVolume pvc-92814604-823a-49e3-bb2b-277b2cb11d6e found and phase=Bound (101.481947ms) Mar 21 23:59:07.853: INFO: PersistentVolume pvc-92814604-823a-49e3-bb2b-277b2cb11d6e was removed STEP: Deleting storageclass csi-mock-volumes-2898-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2898 STEP: Waiting for namespaces [csi-mock-volumes-2898] to vanish STEP: uninstalling csi mock driver Mar 21 23:59:18.602: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-attacher Mar 21 23:59:18.684: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2898 Mar 21 23:59:18.808: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2898 Mar 21 23:59:18.980: INFO: deleting *v1.Role: csi-mock-volumes-2898-5235/external-attacher-cfg-csi-mock-volumes-2898 Mar 21 23:59:19.056: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2898-5235/csi-attacher-role-cfg Mar 21 23:59:19.118: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-provisioner Mar 21 23:59:19.157: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2898 Mar 21 23:59:19.254: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2898 Mar 21 23:59:19.450: INFO: deleting *v1.Role: csi-mock-volumes-2898-5235/external-provisioner-cfg-csi-mock-volumes-2898 Mar 21 23:59:19.459: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2898-5235/csi-provisioner-role-cfg Mar 21 23:59:20.296: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-resizer Mar 21 23:59:20.433: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2898 Mar 21 23:59:20.704: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2898 Mar 21 23:59:20.868: INFO: deleting *v1.Role: csi-mock-volumes-2898-5235/external-resizer-cfg-csi-mock-volumes-2898 Mar 21 23:59:21.079: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2898-5235/csi-resizer-role-cfg Mar 21 23:59:21.203: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-snapshotter Mar 21 23:59:21.309: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2898 Mar 21 23:59:21.385: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2898 Mar 21 23:59:21.452: INFO: deleting *v1.Role: csi-mock-volumes-2898-5235/external-snapshotter-leaderelection-csi-mock-volumes-2898 Mar 21 23:59:21.488: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2898-5235/external-snapshotter-leaderelection Mar 21 23:59:21.589: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2898-5235/csi-mock Mar 21 23:59:21.630: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2898 Mar 21 23:59:21.683: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2898 Mar 21 23:59:21.760: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2898 Mar 21 23:59:21.777: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2898 Mar 21 23:59:21.817: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2898 Mar 21 23:59:21.917: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2898 Mar 21 23:59:21.942: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2898 Mar 21 23:59:21.992: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2898-5235/csi-mockplugin Mar 21 23:59:22.055: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2898-5235/csi-mockplugin-attacher Mar 21 23:59:22.156: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2898-5235/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-2898-5235 STEP: Waiting for namespaces [csi-mock-volumes-2898-5235] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:12.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:134.313 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":133,"completed":23,"skipped":1109,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:12.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Mar 22 00:00:13.333: INFO: Waiting up to 5m0s for pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398" in namespace "projected-8037" to be "Succeeded or Failed" Mar 22 00:00:13.369: INFO: Pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.340342ms Mar 22 00:00:15.391: INFO: Pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057627684s Mar 22 00:00:17.997: INFO: Pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664358536s Mar 22 00:00:20.525: INFO: Pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398": Phase="Pending", Reason="", readiness=false. Elapsed: 7.191627702s Mar 22 00:00:22.539: INFO: Pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.20631776s STEP: Saw pod success Mar 22 00:00:22.540: INFO: Pod "metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398" satisfied condition "Succeeded or Failed" Mar 22 00:00:22.560: INFO: Trying to get logs from node latest-worker pod metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398 container client-container: STEP: delete the pod Mar 22 00:00:22.788: INFO: Waiting for pod metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398 to disappear Mar 22 00:00:22.847: INFO: Pod metadata-volume-babb3c3b-9695-48b8-9c05-3cc4b7925398 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:22.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8037" for this suite. • [SLOW TEST:10.110 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":24,"skipped":1137,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:22.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Mar 22 00:00:29.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-917ce40f-ff18-4477-85d6-3547a1c7cf3f] 
Namespace:persistent-local-volumes-test-9501 PodName:hostexec-latest-worker2-nv7hq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:00:29.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:00:29.508: INFO: Creating a PV followed by a PVC Mar 22 00:00:29.531: INFO: Waiting for PV local-pvgbhtt to bind to PVC pvc-4tmns Mar 22 00:00:29.532: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4tmns] to have phase Bound Mar 22 00:00:29.598: INFO: PersistentVolumeClaim pvc-4tmns found but phase is Pending instead of Bound. Mar 22 00:00:31.704: INFO: PersistentVolumeClaim pvc-4tmns found but phase is Pending instead of Bound. Mar 22 00:00:33.722: INFO: PersistentVolumeClaim pvc-4tmns found but phase is Pending instead of Bound. Mar 22 00:00:35.732: INFO: PersistentVolumeClaim pvc-4tmns found but phase is Pending instead of Bound. Mar 22 00:00:37.766: INFO: PersistentVolumeClaim pvc-4tmns found but phase is Pending instead of Bound. Mar 22 00:00:39.836: INFO: PersistentVolumeClaim pvc-4tmns found but phase is Pending instead of Bound. Mar 22 00:00:41.857: INFO: PersistentVolumeClaim pvc-4tmns found and phase=Bound (12.32547785s) Mar 22 00:00:41.857: INFO: Waiting up to 3m0s for PersistentVolume local-pvgbhtt to have phase Bound Mar 22 00:00:41.944: INFO: PersistentVolume local-pvgbhtt found and phase=Bound (86.401355ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir STEP: Initializing test volumes Mar 22 00:00:42.025: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ea0506e6-5253-424f-bbf8-11a422fad0dd] Namespace:persistent-local-volumes-test-9501 PodName:hostexec-latest-worker2-nv7hq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:00:42.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:00:42.161: INFO: Creating a PV followed by a PVC Mar 22 00:00:42.247: INFO: Waiting for PV local-pvmtlp7 to bind to PVC pvc-drfz9 Mar 22 00:00:42.247: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-drfz9] to have phase Bound Mar 22 00:00:42.282: INFO: PersistentVolumeClaim pvc-drfz9 found but phase is Pending instead of Bound. Mar 22 00:00:44.339: INFO: PersistentVolumeClaim pvc-drfz9 found and phase=Bound (2.092684564s) Mar 22 00:00:44.340: INFO: Waiting up to 3m0s for PersistentVolume local-pvmtlp7 to have phase Bound Mar 22 00:00:44.363: INFO: PersistentVolume local-pvmtlp7 found and phase=Bound (23.863902ms) Mar 22 00:00:44.405: INFO: Waiting up to 5m0s for pod "pod-ce432dde-94fe-47c7-8c89-1ceeefadca9d" in namespace "persistent-local-volumes-test-9501" to be "Unschedulable" Mar 22 00:00:44.489: INFO: Pod "pod-ce432dde-94fe-47c7-8c89-1ceeefadca9d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 83.222913ms Mar 22 00:00:44.489: INFO: Pod "pod-ce432dde-94fe-47c7-8c89-1ceeefadca9d" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Mar 22 00:00:44.489: INFO: Deleting PersistentVolumeClaim "pvc-4tmns" Mar 22 00:00:44.557: INFO: Deleting PersistentVolume "local-pvgbhtt" STEP: Removing the test directory Mar 22 00:00:44.585: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-917ce40f-ff18-4477-85d6-3547a1c7cf3f] Namespace:persistent-local-volumes-test-9501 PodName:hostexec-latest-worker2-nv7hq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:00:44.585: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:45.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9501" for this suite. • [SLOW TEST:22.314 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":133,"completed":25,"skipped":1197,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:45.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d" Mar 22 00:00:49.756: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d" "/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d"] Namespace:persistent-local-volumes-test-9862 PodName:hostexec-latest-worker-tj6kq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:00:49.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:00:49.908: INFO: Creating a PV followed by a PVC Mar 22 00:00:49.959: INFO: Waiting for PV local-pvzwrqk to bind to PVC pvc-gd765 Mar 22 00:00:49.960: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-gd765] to have phase Bound Mar 22 00:00:50.053: INFO: PersistentVolumeClaim pvc-gd765 found but phase is Pending instead of Bound. Mar 22 00:00:52.070: INFO: PersistentVolumeClaim pvc-gd765 found but phase is Pending instead of Bound. Mar 22 00:00:54.213: INFO: PersistentVolumeClaim pvc-gd765 found but phase is Pending instead of Bound. Mar 22 00:00:56.435: INFO: PersistentVolumeClaim pvc-gd765 found but phase is Pending instead of Bound. Mar 22 00:00:58.570: INFO: PersistentVolumeClaim pvc-gd765 found and phase=Bound (8.610399765s) Mar 22 00:00:58.570: INFO: Waiting up to 3m0s for PersistentVolume local-pvzwrqk to have phase Bound Mar 22 00:00:58.830: INFO: PersistentVolume local-pvzwrqk found and phase=Bound (259.751895ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:01:10.837: INFO: pod "pod-ba832d5a-9a41-43a9-a286-7d3037f29f00" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:01:10.837: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9862 PodName:pod-ba832d5a-9a41-43a9-a286-7d3037f29f00 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:10.837: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:11.006: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 22 00:01:11.007: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9862 PodName:pod-ba832d5a-9a41-43a9-a286-7d3037f29f00 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:11.007: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:11.080: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 22 00:01:11.080: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9862 PodName:pod-ba832d5a-9a41-43a9-a286-7d3037f29f00 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:11.080: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:11.192: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo 
/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-ba832d5a-9a41-43a9-a286-7d3037f29f00 in namespace persistent-local-volumes-test-9862 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:01:11.200: INFO: Deleting PersistentVolumeClaim "pvc-gd765" Mar 22 00:01:11.290: INFO: Deleting PersistentVolume "local-pvzwrqk" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d" Mar 22 00:01:11.317: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d"] Namespace:persistent-local-volumes-test-9862 PodName:hostexec-latest-worker-tj6kq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:01:11.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 22 00:01:11.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0fad7dc3-078d-40e9-b885-2c7b1c30c46d] Namespace:persistent-local-volumes-test-9862 PodName:hostexec-latest-worker-tj6kq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:01:11.555: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:01:11.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9862" for this suite. 
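The whole [Volume type: tmpfs] fixture above reduces to a handful of node-side commands: create the directory, mount a 10m tmpfs over it, write and read the test file (which the pod sees at /mnt/volume1), then unmount and remove. A hand-runnable sketch — the directory name is illustrative, since the suite generates a UUID-suffixed path and, as the log shows, passes a tmpfs-<path> string as the device name:

# Node-side equivalent of this test's setup, check, and teardown.
dir=/tmp/local-volume-test-example   # illustrative; the suite appends a UUID
mkdir -p "$dir"
mount -t tmpfs -o size=10m tmpfs "$dir"
echo test-file-content > "$dir/test-file"   # what write-pod does via /mnt/volume1
cat "$dir/test-file"                        # read-back check
umount "$dir"
rm -r "$dir"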
• [SLOW TEST:26.512 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":26,"skipped":1229,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:01:11.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce" Mar 22 00:01:19.438: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce" "/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce"] Namespace:persistent-local-volumes-test-5462 PodName:hostexec-latest-worker-2ncn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:01:19.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:01:19.675: INFO: Creating a PV followed by a PVC Mar 22 00:01:20.148: INFO: Waiting for PV local-pvvn24l to bind to PVC pvc-tl5r9 Mar 22 00:01:20.148: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tl5r9] to have phase Bound Mar 22 00:01:20.244: INFO: PersistentVolumeClaim pvc-tl5r9 found but phase is Pending instead of Bound. Mar 22 00:01:22.497: INFO: PersistentVolumeClaim pvc-tl5r9 found but phase is Pending instead of Bound. Mar 22 00:01:24.600: INFO: PersistentVolumeClaim pvc-tl5r9 found but phase is Pending instead of Bound. 
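The "Creating a PV followed by a PVC" step in the bind wait below is static provisioning: the suite defines a local PV pinned to one node, then a claim paired with it, which is why the claim sits Pending for a few poll rounds until the PV controller binds it. The log never prints the manifests; a minimal sketch of such a pair, with illustrative names, a 10Mi request, and an assumed no-provisioner StorageClass called local-storage (volumeName shown here is one way to pre-bind):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # illustrative; the suite generates names like local-pvvn24l
spec:
  capacity:
    storage: 10Mi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # assumed; its provisioner is kubernetes.io/no-provisioner
  local:
    path: /tmp/local-volume-test-example
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["latest-worker"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  volumeName: local-pv-example      # pre-bind to the PV above
  resources:
    requests:
      storage: 10Mi
EOF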
Mar 22 00:01:26.694: INFO: PersistentVolumeClaim pvc-tl5r9 found but phase is Pending instead of Bound. Mar 22 00:01:28.724: INFO: PersistentVolumeClaim pvc-tl5r9 found and phase=Bound (8.576020552s) Mar 22 00:01:28.724: INFO: Waiting up to 3m0s for PersistentVolume local-pvvn24l to have phase Bound Mar 22 00:01:28.808: INFO: PersistentVolume local-pvvn24l found and phase=Bound (83.629011ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:01:35.412: INFO: pod "pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:01:35.412: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:35.412: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:35.610: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:01:35.610: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:35.610: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:35.775: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:01:42.634: INFO: pod "pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3" created on Node "latest-worker" Mar 22 00:01:42.634: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:42.634: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:42.732: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 22 00:01:42.733: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:42.733: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:43.347: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 22 00:01:43.347: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5462 PodName:pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:01:43.347: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:44.401: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: 
"/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 in namespace persistent-local-volumes-test-5462 STEP: Deleting pod2 STEP: Deleting pod pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 in namespace persistent-local-volumes-test-5462 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:01:45.046: INFO: Deleting PersistentVolumeClaim "pvc-tl5r9" Mar 22 00:01:45.292: INFO: Deleting PersistentVolume "local-pvvn24l" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce" Mar 22 00:01:46.023: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce"] Namespace:persistent-local-volumes-test-5462 PodName:hostexec-latest-worker-2ncn4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:01:46.023: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:01:47.004: INFO: exec latest-worker: command: umount "/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce" Mar 22 00:01:47.004: INFO: exec latest-worker: stdout: "" Mar 22 00:01:47.004: INFO: exec latest-worker: stderr: "" Mar 22 00:01:47.004: INFO: exec latest-worker: exit code: 0 Mar 22 00:01:47.004: FAIL: Unexpected error: <*errors.errorString | 0xc00409a220>: { s: "Internal error occurred: error executing command in container: failed to exec in container: failed to start exec \"97a7d08015b90eaca0d6f1e348c609ae157ebfbb29200b8224839567d7758584\": OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown", } Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "97a7d08015b90eaca0d6f1e348c609ae157ebfbb29200b8224839567d7758584": OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeTmpfs(0xc004231dd0, 0xc000b2f000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:114 +0x1d9 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0xc004231dd0, 0xc000b2f000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:333 +0x125 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0030fc990, 0xc000f9af48, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:867 +0x82 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 +0x65 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-5462". STEP: Found 15 events. Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:13 +0000 UTC - event for hostexec-latest-worker-2ncn4: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5462/hostexec-latest-worker-2ncn4 to latest-worker Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:14 +0000 UTC - event for hostexec-latest-worker-2ncn4: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:18 +0000 UTC - event for hostexec-latest-worker-2ncn4: {kubelet latest-worker} Created: Created container agnhost-container Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:18 +0000 UTC - event for hostexec-latest-worker-2ncn4: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:20 +0000 UTC - event for pvc-tl5r9: {persistentvolume-controller } ProvisioningFailed: no volume plugin matched name: kubernetes.io/no-provisioner Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:29 +0000 UTC - event for pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5462/pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 to latest-worker Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:32 +0000 UTC - event for pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29" already present on machine Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:34 +0000 UTC - event for pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4: {kubelet latest-worker} Created: Created container write-pod Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:34 +0000 UTC - event for pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4: {kubelet latest-worker} Started: Started container write-pod Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:35 +0000 UTC - event for pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5462/pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 to latest-worker Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:38 +0000 UTC - event for pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29" already present on machine Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:39 +0000 UTC - event for pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3: {kubelet latest-worker} Created: Created container write-pod Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:39 +0000 UTC - event for pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3: {kubelet latest-worker} Started: Started container write-pod Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:44 +0000 UTC - event for pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4: {kubelet latest-worker} Killing: Stopping container write-pod Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:45 +0000 UTC - event for pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3: {kubelet latest-worker} Killing: Stopping container write-pod Mar 22 00:01:48.339: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:01:48.339: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:34 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:29 +0000 UTC }] Mar 22 00:01:48.339: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:01:35 +0000 UTC }] Mar 22 00:01:48.339: INFO: Mar 22 00:01:48.587: INFO: Logging node info for node latest-control-plane Mar 22 00:01:49.453: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:49.454: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:01:49.852: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:01:51.001: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container etcd ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:51.001: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container coredns ready: false, restart count 0 Mar 22 00:01:51.001: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:51.001: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container coredns ready: false, restart count 0 Mar 22 00:01:51.001: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-apiserver ready: true, restart count 0 W0322 00:01:51.796063 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
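The FAIL recorded above came from one more exec into hostexec-latest-worker-2ncn4 after its agnhost container had already stopped, so the OCI runtime refused the exec; the umount itself had returned exit code 0 an instant earlier. The timing is consistent with the NoExecute taint kubernetes.io/e2e-evict-taint-key added to latest-worker at 00:01:45 by a concurrent test (visible in the latest-worker node dump below), which would have evicted the hostexec pod mid-cleanup. Driving the same cleanup by hand, a phase check before kubectl exec sidesteps that race; a sketch using the pod, container, and mount path from this test — the nsenter prefix matches the framework's command and is what makes the umount act on the host rather than inside the container:

ns=persistent-local-volumes-test-5462
pod=hostexec-latest-worker-2ncn4
dir=/tmp/local-volume-test-6053b7ce-ac94-4c73-8eb1-8a488e992cce
# Only exec if the pod is still Running; otherwise clean up on the node directly.
if [ "$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}')" = "Running" ]; then
  kubectl exec -n "$ns" "$pod" -c agnhost-container -- \
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "umount \"$dir\" && rm -r \"$dir\""
else
  echo "pod $pod is not Running; run umount/rm on latest-worker itself"
fi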
Mar 22 00:01:52.086: INFO: Latency metrics for node latest-control-plane Mar 22 00:01:52.086: INFO: Logging node info for node latest-worker Mar 22 00:01:53.278: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:53.279: INFO: Logging kubelet events for node latest-worker Mar 22 00:01:53.547: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:01:53.969: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container coredns ready: true, restart count 0 Mar 22 00:01:53.969: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container pause ready: false, restart count 0 Mar 22 00:01:53.969: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:53.969: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:01:53.969: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:01:53.969: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:53.969: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:01:53.969: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:53.969: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.969: INFO: Container token-test ready: false, restart count 0 W0322 00:01:54.260352 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
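The Node Info block above is the standard per-node diagnostic the e2e framework dumps on failure: capacity/allocatable, the four NodeConditions (MemoryPressure, DiskPressure, PIDPressure, Ready), addresses, system info, and the kubelet's image cache. As a minimal illustrative sketch (not framework code), the same condition data can be pulled directly with client-go, assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config; for root that is the
	// /root/.kube/config path shown in the log lines above).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// A healthy node reports the three pressure conditions False and
			// Ready True, as in the dump for latest-worker above.
			fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}
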
Mar 22 00:01:54.743: INFO: Latency metrics for node latest-worker Mar 22 00:01:54.743: INFO: Logging node info for node latest-worker2 Mar 22 00:01:54.824: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:54.826: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:01:55.110: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:01:55.460: INFO: pod-service-account-mountsa started at 
2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.460: INFO: taint-eviction-a1 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container pause ready: false, restart count 0 Mar 22 00:01:55.460: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:55.460: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:55.460: INFO: chaos-daemon-95pmt started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:01:55.460: INFO: pod-service-account-defaultsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.460: INFO: chaos-controller-manager-69c479c674-k8l6r started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:01:55.460: INFO: pod-service-account-nomountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.460: INFO: pod-service-account-nomountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.460: INFO: coredns-74ff55c5b-q4csd started at 2021-03-21 23:57:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container coredns ready: true, restart count 0 Mar 22 00:01:55.460: INFO: pod-service-account-mountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.460: INFO: Container token-test ready: false, restart count 0 W0322 00:01:55.895931 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:01:56.930: INFO: Latency metrics for node latest-worker2 Mar 22 00:01:56.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5462" for this suite. 
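The failure recorded next is a teardown race rather than a data problem: during local-volume cleanup the framework exec'd into a container that had already exited, and the container runtime rejected it ("cannot exec a container that has stopped"). A hedged sketch of the kind of pre-exec guard that narrows this window, assuming client-go; the helper name is illustrative, not the framework's:

package e2eutil

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// containerRunning reports whether the named container in the pod is still in
// the Running state, so a caller can skip or retry an exec that would
// otherwise fail with "cannot exec a container that has stopped".
func containerRunning(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, s := range p.Status.ContainerStatuses {
		if s.Name == container {
			return s.State.Running != nil, nil
		}
	}
	return false, fmt.Errorf("container %q not found in pod %s/%s", container, ns, pod)
}

Even with such a check there is still a window between the Get and the exec, which is why errors like the one below are generally triaged as flakes rather than product bugs.
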
• Failure in Spec Teardown (AfterEach) [45.675 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [AfterEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Mar 22 00:01:47.004: Unexpected error: <*errors.errorString | 0xc00409a220>: { s: "Internal error occurred: error executing command in container: failed to exec in container: failed to start exec \"97a7d08015b90eaca0d6f1e348c609ae157ebfbb29200b8224839567d7758584\": OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown", } Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "97a7d08015b90eaca0d6f1e348c609ae157ebfbb29200b8224839567d7758584": OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:114 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":26,"skipped":1283,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:01:57.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 Mar 22 00:01:58.496: FAIL: Unexpected error: <*errors.errorString | 0xc00298ed30>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func21.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160 +0xac k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-8744". STEP: Found 0 events. Mar 22 00:01:58.679: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:01:58.679: INFO: Mar 22 00:01:58.724: INFO: Logging node info for node latest-control-plane Mar 22 00:01:58.768: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:58.769: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:01:58.827: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:01:58.860: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.860: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:58.861: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container coredns ready: true, restart count 0 Mar 22 00:01:58.861: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:01:58.861: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:01:58.861: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:01:58.861: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:01:58.861: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container etcd ready: true, restart count 0 Mar 22 00:01:58.861: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:58.861: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:58.861: INFO: Container coredns ready: true, restart count 0 W0322 00:01:58.914938 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
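The node specs logged in this teardown explain the "there are currently no ready, schedulable nodes in the cluster" failure above: latest-control-plane carries its usual node-role.kubernetes.io/master NoSchedule taint, and both workers still carry the kubernetes.io/e2e-evict-taint-key=evictTaintVal NoExecute taint added at 00:01:45 (consistent with the taint-eviction-a1/a2 pods seen earlier), so the framework's ready-and-schedulable filter matches zero of the three nodes. A minimal sketch of that filter, assuming k8s.io/api/core/v1 and treating any NoSchedule/NoExecute taint as disqualifying (the real framework additionally honors tolerations):

package e2eutil

import v1 "k8s.io/api/core/v1"

// readySchedulable keeps nodes that are Ready and carry no NoSchedule or
// NoExecute taint. With the master taint on the control plane and the
// e2e-evict taint on both workers, it returns an empty slice for the
// cluster dumped in this log.
func readySchedulable(nodes []v1.Node) []v1.Node {
	var out []v1.Node
	for _, n := range nodes {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
				ready = true
			}
		}
		blocked := false
		for _, t := range n.Spec.Taints {
			if t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute {
				blocked = true
			}
		}
		if ready && !blocked {
			out = append(out, n)
		}
	}
	return out
}
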
Mar 22 00:01:59.092: INFO: Latency metrics for node latest-control-plane Mar 22 00:01:59.092: INFO: Logging node info for node latest-worker Mar 22 00:01:59.112: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:59.113: INFO: Logging kubelet events for node latest-worker Mar 22 00:01:59.135: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:01:59.165: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:59.165: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container chaos-daemon ready: false, restart count 0 Mar 22 00:01:59.165: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:01:59.165: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:01:59.165: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:59.165: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container kindnet-cni ready: false, restart count 0 Mar 22 00:01:59.165: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:59.165: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container coredns ready: true, restart count 0 Mar 22 00:01:59.165: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.165: INFO: Container pause ready: true, restart count 0 W0322 00:01:59.198648 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
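Note the Taints field in the latest-worker dump above: at 00:01:45 the e2e.test manager added kubernetes.io/e2e-evict-taint-key with Effect:NoExecute, and several of the kubelet's pods are already reporting ready: false. That taint is what the framework's node check trips over shortly. A minimal client-go sketch of such a readiness-and-schedulability filter, assuming the kubeconfig path used in this run (the helper names are illustrative, not the framework's actual code):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReadySchedulable mirrors the kind of check an e2e-style helper makes
// before handing nodes to a test; illustrative, not the framework's code.
func isReadySchedulable(node v1.Node) bool {
	if node.Spec.Unschedulable {
		return false
	}
	// A NoSchedule or NoExecute taint disqualifies the node for test pods
	// that carry no matching toleration (the e2e-evict-taint-key taint
	// above has Effect:NoExecute).
	for _, t := range node.Spec.Taints {
		if t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute {
			return false
		}
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, n := range nodes.Items {
		if isReadySchedulable(n) {
			ready++
		}
	}
	fmt.Printf("%d of %d nodes are ready and schedulable\n", ready, len(nodes.Items))
}

Run against this cluster at this moment, a filter like that returns 0 of 3 nodes, which matches the error reported in the failure summaries below.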
Mar 22 00:01:59.460: INFO: Latency metrics for node latest-worker Mar 22 00:01:59.460: INFO: Logging node info for node latest-worker2 Mar 22 00:01:59.537: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:59.538: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:01:59.571: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:01:59.621: INFO: chaos-controller-manager-69c479c674-k8l6r 
started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container chaos-mesh ready: false, restart count 0 Mar 22 00:01:59.621: INFO: pod-service-account-nomountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:59.621: INFO: pod-service-account-nomountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:59.621: INFO: coredns-74ff55c5b-q4csd started at 2021-03-21 23:57:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container coredns ready: false, restart count 0 Mar 22 00:01:59.621: INFO: pod-service-account-mountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:59.621: INFO: pod-service-account-mountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:59.621: INFO: taint-eviction-a1 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container pause ready: false, restart count 0 Mar 22 00:01:59.621: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container kindnet-cni ready: false, restart count 0 Mar 22 00:01:59.621: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:59.621: INFO: chaos-daemon-95pmt started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container chaos-daemon ready: false, restart count 0 Mar 22 00:01:59.621: INFO: pod-service-account-defaultsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:59.621: INFO: Container token-test ready: false, restart count 0 W0322 00:01:59.677474 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:02:00.025: INFO: Latency metrics for node latest-worker2 Mar 22 00:02:00.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8744" for this suite. 
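The failure summary that follows is a direct consequence of those taints: the spec's BeforeEach asks the framework for ready, schedulable nodes, but both workers still carry the NoExecute eviction taint left behind by the taint-eviction test, and the control plane (dumped further below) offers only node-role.kubernetes.io/master:NoSchedule, so the candidate list comes back empty. A hypothetical cleanup helper that would strip the leftover taint, sketched with client-go under the assumption that the taint key matches the node dumps above (this is not part of the e2e framework):

package taintcleanup

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// removeEvictTaint removes the leftover e2e eviction taint from a node so
// pods can schedule on it again. Hypothetical helper; the taint key is the
// one visible in the node dumps in this log.
func removeEvictTaint(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := make([]v1.Taint, 0, len(node.Spec.Taints))
	for _, t := range node.Spec.Taints {
		if t.Key != "kubernetes.io/e2e-evict-taint-key" {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

Because the taint's effect is NoExecute and the test pods declare no matching toleration, running pods are evicted and new ones cannot bind until the taint is removed, which is why every spec from this point fails the same way in its setup.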
• Failure in Spec Setup (BeforeEach) [2.737 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 Mar 22 00:01:58.496: Unexpected error: <*errors.errorString | 0xc00298ed30>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":133,"completed":26,"skipped":1299,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]} SSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:02:00.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3545 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:02:01.383: FAIL: Unexpected error: <*errors.errorString | 0xc000ff09b0>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/drivers.(*mockCSIDriver).PrepareTest(0xc004bbc000, 0xc001181080, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:496 +0x198 k8s.io/kubernetes/test/e2e/storage.glob..func1.1(0x0, 0x0, 0x1, 0xc0009370b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:181 +0x31a k8s.io/kubernetes/test/e2e/storage.glob..func1.8.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 +0xf9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "csi-mock-volumes-3545". STEP: Found 0 events. Mar 22 00:02:01.437: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:02:01.437: INFO: Mar 22 00:02:01.490: INFO: Logging node info for node latest-control-plane Mar 22 00:02:01.573: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:01.574: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:02:01.695: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:02:01.830: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container etcd ready: true, restart count 0 Mar 22 00:02:01.830: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:02:01.830: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container coredns ready: true, restart count 0 Mar 22 00:02:01.830: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:02:01.830: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:02:01.830: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:02:01.830: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:02:01.830: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container coredns ready: true, restart count 0 Mar 22 00:02:01.830: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:01.830: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 00:02:01.895099 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
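For contrast, the latest-control-plane dump above shows only the standard node-role.kubernetes.io/master:NoSchedule taint, so with both workers under the NoExecute eviction taint there is genuinely no node left for test pods. A pod could stay on a tainted worker only by declaring a matching toleration; a sketch of what that would look like (illustrative only, since the failing e2e pods set no such toleration, and the key and value are taken from the node dumps above):

package tolerations

import v1 "k8s.io/api/core/v1"

// tolerateEvictTaint adds a toleration matching the NoExecute eviction
// taint seen on the workers, so a pod would neither be evicted from nor
// rejected by a node carrying that taint. Illustrative sketch.
func tolerateEvictTaint(pod *v1.Pod) {
	pod.Spec.Tolerations = append(pod.Spec.Tolerations, v1.Toleration{
		Key:      "kubernetes.io/e2e-evict-taint-key",
		Operator: v1.TolerationOpEqual,
		Value:    "evictTaintVal",
		Effect:   v1.TaintEffectNoExecute,
	})
}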
Mar 22 00:02:02.231: INFO: Latency metrics for node latest-control-plane Mar 22 00:02:02.231: INFO: Logging node info for node latest-worker Mar 22 00:02:02.350: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:02.351: INFO: Logging kubelet events for node latest-worker
Mar 22 00:02:02.423: INFO: Logging pods the kubelet thinks is on node latest-worker
Mar 22 00:02:02.496: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container coredns ready: false, restart count 0
Mar 22 00:02:02.496: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container pause ready: true, restart count 0
Mar 22 00:02:02.496: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:02.496: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container chaos-daemon ready: false, restart count 0
Mar 22 00:02:02.496: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:02.496: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:02.496: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:02.496: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container kindnet-cni ready: false, restart count 0
Mar 22 00:02:02.496: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:02.496: INFO: Container token-test ready: false, restart count 0
W0322 00:02:02.528571 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:02.840: INFO: Latency metrics for node latest-worker Mar 22 00:02:02.840: INFO: Logging node info for node latest-worker2 Mar 22 00:02:02.879: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:02.880: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:02:02.926: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:02:03.115: INFO: pod-service-account-mountsa started at 
2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:03.115: INFO: taint-eviction-a1 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container pause ready: false, restart count 0
Mar 22 00:02:03.115: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container kindnet-cni ready: false, restart count 0
Mar 22 00:02:03.115: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:03.115: INFO: chaos-daemon-95pmt started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container chaos-daemon ready: false, restart count 0
Mar 22 00:02:03.115: INFO: pod-service-account-defaultsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:03.115: INFO: chaos-controller-manager-69c479c674-k8l6r started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container chaos-mesh ready: false, restart count 0
Mar 22 00:02:03.115: INFO: pod-service-account-nomountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:03.115: INFO: pod-service-account-nomountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:03.115: INFO: coredns-74ff55c5b-q4csd started at 2021-03-21 23:57:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container coredns ready: false, restart count 0
Mar 22 00:02:03.115: INFO: pod-service-account-mountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:03.115: INFO: Container token-test ready: false, restart count 0
W0322 00:02:03.300246 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:03.773: INFO: Latency metrics for node latest-worker2
STEP: Collecting events from namespace "csi-mock-volumes-3545-6470".
STEP: Found 0 events.
Mar 22 00:02:04.041: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 22 00:02:04.041: INFO:
Mar 22 00:02:04.118: INFO: Logging node info for node latest-control-plane
Mar 22 00:02:04.197: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:04.198: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:02:04.267: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:02:04.333: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses 
recorded)
Mar 22 00:02:04.333: INFO: Container etcd ready: true, restart count 0
Mar 22 00:02:04.333: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:04.333: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:04.333: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 22 00:02:04.333: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container kube-scheduler ready: true, restart count 0
Mar 22 00:02:04.333: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container kube-apiserver ready: true, restart count 0
Mar 22 00:02:04.333: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 00:02:04.333: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:04.333: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:04.333: INFO: Container local-path-provisioner ready: true, restart count 0
W0322 00:02:04.486352 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:04.651: INFO: Latency metrics for node latest-control-plane
Mar 22 00:02:04.651: INFO: Logging node info for node latest-worker
Mar 22 00:02:04.754: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:04.755: INFO: Logging kubelet events for node latest-worker Mar 22 00:02:04.812: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:02:04.933: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container pause ready: true, restart count 0 Mar 22 00:02:04.933: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:04.933: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container chaos-daemon ready: false, restart count 0 Mar 22 00:02:04.933: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:02:04.933: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:02:04.933: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:02:04.933: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container kindnet-cni ready: false, restart count 0 Mar 22 00:02:04.933: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:04.933: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:04.933: INFO: Container coredns ready: false, restart count 0 W0322 00:02:05.030550 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
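Editor's note (not part of the log): the Node spec dumped above for latest-worker shows a NoExecute taint, kubernetes.io/e2e-evict-taint-key=evictTaintVal, added at 00:01:45 — the same minute the taint-eviction-a2 pod in the pod list started — and a taint with that effect is enough to make later specs see the node as unschedulable. A minimal client-go sketch for surfacing such taints, assuming the kubeconfig path printed by this run; the program itself is illustrative and not part of the e2e suite:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the run's own ">>> kubeConfig:" lines.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print every taint on every node; a leftover NoExecute taint here is what
	// makes a worker invisible to specs that need schedulable nodes.
	for _, n := range nodes.Items {
		for _, t := range n.Spec.Taints {
			fmt.Printf("%s\t%s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
		}
	}
}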
Mar 22 00:02:05.376: INFO: Latency metrics for node latest-worker Mar 22 00:02:05.376: INFO: Logging node info for node latest-worker2 Mar 22 00:02:05.467: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:05.468: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:02:05.542: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:02:05.609: INFO: coredns-74ff55c5b-q4csd started at 
2021-03-21 23:57:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container coredns ready: false, restart count 0 Mar 22 00:02:05.609: INFO: pod-service-account-mountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:05.609: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container kindnet-cni ready: false, restart count 0 Mar 22 00:02:05.609: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:02:05.609: INFO: pod-service-account-defaultsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:05.609: INFO: chaos-controller-manager-69c479c674-k8l6r started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container chaos-mesh ready: false, restart count 0 Mar 22 00:02:05.609: INFO: pod-service-account-nomountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:05.609: INFO: pod-service-account-nomountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:05.609: INFO: Container token-test ready: false, restart count 0 W0322 00:02:05.662570 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:02:06.015: INFO: Latency metrics for node latest-worker2 Mar 22 00:02:06.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "csi-mock-volumes-3545" for this suite. STEP: Destroying namespace "csi-mock-volumes-3545-6470" for this suite. 
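Editor's note (not part of the log): the recurring "Waiting up to 3m0s for all (but 0) nodes to be ready" line is a bounded poll of each node's Ready condition. A rough sketch of that kind of wait, assuming client-go and apimachinery's wait package; waitForReadyNodes is a hypothetical helper name, and the framework's real check additionally filters on taints and Spec.Unschedulable:

package e2enotes

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReadyNodes polls until every node reports Ready=True or the timeout expires.
func waitForReadyNodes(cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, n := range nodes.Items {
			ready := false
			for _, c := range n.Status.Conditions {
				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil // not ready yet; poll again until the timeout
			}
		}
		return true, nil
	})
}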
• Failure [5.924 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 Mar 22 00:02:01.383: Unexpected error: <*errors.errorString | 0xc000ff09b0>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:496 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":133,"completed":26,"skipped":1311,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:02:06.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-345 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:02:06.701: FAIL: Unexpected error: <*errors.errorString | 0xc0036aa620>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/drivers.(*mockCSIDriver).PrepareTest(0xc002b5e600, 0xc001181080, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:496 +0x198 k8s.io/kubernetes/test/e2e/storage.glob..func1.1(0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:181 +0x31a k8s.io/kubernetes/test/e2e/storage.glob..func1.17.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1461 +0x175 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "csi-mock-volumes-345". STEP: Found 0 events. Mar 22 00:02:06.797: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:02:06.797: INFO: Mar 22 00:02:06.827: INFO: Logging node info for node latest-control-plane Mar 22 00:02:06.874: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:06.875: INFO: Logging kubelet events for node latest-control-plane
Mar 22 00:02:06.978: INFO: Logging pods the kubelet thinks are on node latest-control-plane
Mar 22 00:02:07.032: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container etcd ready: true, restart count 0
Mar 22 00:02:07.032: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:07.032: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:07.032: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 22 00:02:07.032: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container kube-scheduler ready: true, restart count 0
Mar 22 00:02:07.032: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container kube-apiserver ready: true, restart count 0
Mar 22 00:02:07.032: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 00:02:07.032: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:07.032: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.032: INFO: Container local-path-provisioner ready: true, restart count 0
W0322 00:02:07.098417 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:07.296: INFO: Latency metrics for node latest-control-plane Mar 22 00:02:07.296: INFO: Logging node info for node latest-worker Mar 22 00:02:07.365: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:07.366: INFO: Logging kubelet events for node latest-worker
Mar 22 00:02:07.428: INFO: Logging pods the kubelet thinks are on node latest-worker
Mar 22 00:02:07.483: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container coredns ready: false, restart count 0
Mar 22 00:02:07.483: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container pause ready: true, restart count 0
Mar 22 00:02:07.483: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:07.483: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container chaos-daemon ready: false, restart count 0
Mar 22 00:02:07.483: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:07.483: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:07.483: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:07.483: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container kindnet-cni ready: false, restart count 0
Mar 22 00:02:07.483: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:07.483: INFO: Container token-test ready: false, restart count 0
W0322 00:02:07.514398 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:07.814: INFO: Latency metrics for node latest-worker Mar 22 00:02:07.814: INFO: Logging node info for node latest-worker2 Mar 22 00:02:07.857: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:07.858: INFO: Logging kubelet events for node latest-worker2
Mar 22 00:02:07.903: INFO: Logging pods the kubelet thinks are on node latest-worker2
Mar 22 00:02:07.927: INFO: kube-proxy-7q92q started at 2021-02-19
10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:07.927: INFO: Container kube-proxy ready: true, restart count 0 W0322 00:02:07.955893 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:02:08.177: INFO: Latency metrics for node latest-worker2 STEP: Collecting events from namespace "csi-mock-volumes-345-9186". STEP: Found 0 events. Mar 22 00:02:08.237: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:02:08.237: INFO: Mar 22 00:02:08.252: INFO: Logging node info for node latest-control-plane Mar 22 00:02:08.306: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:08.306: INFO: Logging kubelet events for node latest-control-plane
Mar 22 00:02:08.348: INFO: Logging pods the kubelet thinks is on node latest-control-plane
Mar 22 00:02:08.374: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 00:02:08.374: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:08.374: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 22 00:02:08.374: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 22 00:02:08.374: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container kube-scheduler ready: true, restart count 0
Mar 22 00:02:08.374: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container kube-apiserver ready: true, restart count 0
Mar 22 00:02:08.374: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container etcd ready: true, restart count 0
Mar 22 00:02:08.374: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:08.374: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.374: INFO: Container coredns ready: true, restart count 0
W0322 00:02:08.451601 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
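------------------------------
Note: node dumps like the one above are easier to triage programmatically than by eye. A rough client-go sketch (not the suite's own helper; it only assumes the /root/.kube/config kubeconfig this run already uses) that prints each node's Ready condition and taints, the two fields that matter for the scheduling failures recorded below:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same kubeconfig the e2e run logs with ">>> kubeConfig:".
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					// One line per node: name, Ready status, and any taints.
					fmt.Printf("%s Ready=%s taints=%v\n", n.Name, c.Status, n.Spec.Taints)
				}
			}
		}
	}
------------------------------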
Mar 22 00:02:08.664: INFO: Latency metrics for node latest-control-plane Mar 22 00:02:08.664: INFO: Logging node info for node latest-worker Mar 22 00:02:08.687: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:08.687: INFO: Logging kubelet events for node latest-worker
Mar 22 00:02:08.734: INFO: Logging pods the kubelet thinks is on node latest-worker
Mar 22 00:02:08.743: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:08.743: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:08.743: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:08.743: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container kindnet-cni ready: false, restart count 0
Mar 22 00:02:08.743: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:08.743: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container coredns ready: false, restart count 0
Mar 22 00:02:08.743: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container pause ready: true, restart count 0
Mar 22 00:02:08.743: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:08.743: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:08.743: INFO: Container chaos-daemon ready: false, restart count 0
W0322 00:02:08.813222 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
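------------------------------
Note: the latest-worker dump above shows a NoExecute taint, kubernetes.io/e2e-evict-taint-key, added at 2021-03-22 00:01:45 by the taint-eviction test whose pod taint-eviction-a2 still appears in the listing; while that taint lingers on both workers, the suite can find no ready, schedulable node. A hypothetical clean-up sketch (not part of the suite; the node names and taint key are the ones visible in this dump):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		for _, name := range []string{"latest-worker", "latest-worker2"} {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			// Keep every taint except the leftover e2e eviction taint.
			kept := node.Spec.Taints[:0]
			for _, t := range node.Spec.Taints {
				if t.Key != "kubernetes.io/e2e-evict-taint-key" {
					kept = append(kept, t)
				}
			}
			node.Spec.Taints = kept
			if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
	}
------------------------------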
Mar 22 00:02:09.125: INFO: Latency metrics for node latest-worker Mar 22 00:02:09.125: INFO: Logging node info for node latest-worker2 Mar 22 00:02:09.163: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:09.165: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:02:09.220: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:02:09.248: INFO: kube-proxy-7q92q started at 2021-02-19 
10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:09.248: INFO: Container kube-proxy ready: true, restart count 0
W0322 00:02:09.284379 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:09.623: INFO: Latency metrics for node latest-worker2
Mar 22 00:02:09.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "csi-mock-volumes-345" for this suite.
STEP: Destroying namespace "csi-mock-volumes-345-9186" for this suite.

• Failure [3.585 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433
    should modify fsGroup if fsGroupPolicy=default [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457

    Mar 22 00:02:06.701: Unexpected error:
        <*errors.errorString | 0xc0036aa620>: {
            s: "there are currently no ready, schedulable nodes in the cluster",
        }
        there are currently no ready, schedulable nodes in the cluster
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:496
------------------------------
{"msg":"FAILED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":133,"completed":26,"skipped":1332,"failed":5,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default"]}
SSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:02:09.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
Mar 22 00:02:10.014: FAIL: Unexpected error:
    <*errors.errorString | 0xc001c4ab60>: {
        s: "there are currently no ready, schedulable nodes in the cluster",
    }
    there are currently no ready, schedulable nodes in the cluster
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func21.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160 +0xac
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002dc4900, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "persistent-local-volumes-test-4667".
STEP: Found 0 events.
Mar 22 00:02:10.130: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 22 00:02:10.130: INFO:
Mar 22 00:02:10.176: INFO: Logging node info for node latest-control-plane
Mar 22 00:02:10.217: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:10.217: INFO: Logging kubelet events for node latest-control-plane
Mar 22 00:02:10.265: INFO: Logging pods the kubelet thinks are on node latest-control-plane
Mar 22 00:02:10.301: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container etcd ready: true, restart count 0
Mar 22 00:02:10.301: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:02:10.301: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:10.301: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container coredns ready: true, restart count 0
Mar 22 00:02:10.301: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 22 00:02:10.301: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 22 00:02:10.301: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container kube-scheduler ready: true, restart count 0
Mar 22 00:02:10.301: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container kube-apiserver ready: true, restart count 0
Mar 22 00:02:10.301: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.301: INFO: Container kindnet-cni ready: true, restart count 0
W0322 00:02:10.351818 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:10.515: INFO: Latency metrics for node latest-control-plane Mar 22 00:02:10.515: INFO: Logging node info for node latest-worker Mar 22 00:02:10.534: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:10.535: INFO: Logging kubelet events for node latest-worker
Mar 22 00:02:10.621: INFO: Logging pods the kubelet thinks are on node latest-worker
Mar 22 00:02:10.673: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container kindnet-cni ready: false, restart count 0
Mar 22 00:02:10.673: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:10.673: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container coredns ready: false, restart count 0
Mar 22 00:02:10.673: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container pause ready: true, restart count 0
Mar 22 00:02:10.673: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container token-test ready: false, restart count 0
Mar 22 00:02:10.673: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container chaos-daemon ready: false, restart count 0
Mar 22 00:02:10.673: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:10.673: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container write-pod ready: false, restart count 0
Mar 22 00:02:10.673: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:10.673: INFO: Container kube-proxy ready: true, restart count 0
W0322 00:02:10.723712 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
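The per-pod lines above ("Container X ready: ..., restart count N") are read from each pod's status.containerStatuses for pods bound to the node being dumped. The following is an illustrative client-go sketch of how such a listing can be reproduced, not the framework's actual implementation; the field-selector approach, the output format, and the hard-coded node name latest-worker (taken from the dump above) are assumptions.

```go
// podreadiness.go: list pods bound to a node and print container readiness,
// roughly mirroring the "Container X ready: ..." lines in the dump above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Pods from all namespaces that are scheduled to this node: the same
	// set the kubelet "thinks" it owns.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=latest-worker"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s (%d+%d container statuses recorded)\n",
			p.Namespace, p.Name,
			len(p.Status.InitContainerStatuses), len(p.Status.ContainerStatuses))
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n",
				cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}
```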
Mar 22 00:02:10.981: INFO: Latency metrics for node latest-worker Mar 22 00:02:10.981: INFO: Logging node info for node latest-worker2 Mar 22 00:02:11.027: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:02:11.028: INFO: Logging kubelet events for node latest-worker2
Mar 22 00:02:11.117: INFO: Logging pods the kubelet thinks are on node latest-worker2
Mar 22 00:02:11.147: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:02:11.147: INFO: Container kube-proxy ready: true, restart count 0
W0322 00:02:11.173466 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:02:11.456: INFO: Latency metrics for node latest-worker2
Mar 22 00:02:11.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4667" for this suite.

• Failure in Spec Setup (BeforeEach) [1.793 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-bindmounted] [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255

Mar 22 00:02:10.014: Unexpected error:
    <*errors.errorString | 0xc001c4ab60>: {
        s: "there are currently no ready, schedulable nodes in the cluster",
    }
    there are currently no ready, schedulable nodes in the cluster
occurred

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":26,"skipped":1336,"failed":6,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
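Every failure in this stretch of the run is the same setup error: the BeforeEach at persistent_volumes-local.go:160 (and the CSI driver setup at csi.go:496) requires ready, schedulable nodes and finds none. The node dumps above make the likely cause visible: both latest-worker and latest-worker2 still carry the NoExecute taint kubernetes.io/e2e-evict-taint-key (TimeAdded 2021-03-22 00:01:45, apparently left behind by the taint-eviction test that started taint-eviction-a2), and latest-control-plane carries the usual node-role.kubernetes.io/master NoSchedule taint, so none of the three kind nodes counts as schedulable for untolerating test pods. Below is a minimal client-go sketch of such a check, assuming a local kubeconfig; it approximates the framework's behavior rather than reproducing its actual helper, and the function name and the toleration-free taint handling are assumptions.

```go
// readycheck.go: a rough approximation of the "ready, schedulable nodes"
// precondition behind the failures above (not the e2e framework's code).
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReadySchedulable treats a node as usable only if it reports Ready=True,
// is not cordoned, and carries no NoSchedule/NoExecute taint (the test pods
// in this run tolerate neither of the taints shown in the dumps).
func isReadySchedulable(node *v1.Node) bool {
	if node.Spec.Unschedulable {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute {
			return false
		}
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for i := range nodes.Items {
		n := &nodes.Items[i]
		fmt.Printf("%s taints=%v readySchedulable=%v\n", n.Name, n.Spec.Taints, isReadySchedulable(n))
		if isReadySchedulable(n) {
			ready++
		}
	}
	if ready == 0 {
		fmt.Println("there are currently no ready, schedulable nodes in the cluster")
	}
}
```

Run against this cluster at 00:02, a check like this would report zero usable nodes until the leftover NoExecute taint is removed or expires, which matches the cascade of identical failures recorded here.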
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 Mar 22 00:02:11.703: FAIL: Unexpected error: <*errors.errorString | 0xc000fc1160>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func21.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160 +0xac k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-3364". STEP: Found 0 events. Mar 22 00:02:11.800: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:02:11.800: INFO: Mar 22 00:02:11.829: INFO: Logging node info for node latest-control-plane Mar 22 00:02:11.861: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:11.862: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:02:11.914: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:02:11.957: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container etcd ready: true, restart count 0 Mar 22 00:02:11.957: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:02:11.957: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container coredns ready: true, restart count 0 Mar 22 00:02:11.957: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:02:11.957: INFO: kube-scheduler-latest-control-plane started 
at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:02:11.957: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:02:11.957: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:02:11.957: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container coredns ready: true, restart count 0 Mar 22 00:02:11.957: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:11.957: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 00:02:12.016691 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:02:12.201: INFO: Latency metrics for node latest-control-plane Mar 22 00:02:12.201: INFO: Logging node info for node latest-worker Mar 22 00:02:12.211: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volum
es-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volume
s-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:12.212: INFO: Logging kubelet events for node latest-worker Mar 22 00:02:12.260: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:02:12.299: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container pause ready: true, restart count 0 Mar 22 00:02:12.299: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:12.299: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container chaos-daemon ready: false, restart count 0 Mar 22 00:02:12.299: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 
UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:02:12.299: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:02:12.299: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:02:12.299: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container kindnet-cni ready: false, restart count 0 Mar 22 00:02:12.299: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container token-test ready: false, restart count 0 Mar 22 00:02:12.299: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.299: INFO: Container coredns ready: false, restart count 0 W0322 00:02:12.330659 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:02:12.643: INFO: Latency metrics for node latest-worker Mar 22 00:02:12.643: INFO: Logging node info for node latest-worker2 Mar 22 00:02:12.706: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi
-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"
csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-87
66":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 
k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:02:12.707: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:02:12.761: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:02:12.786: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:02:12.786: INFO: Container kube-proxy ready: true, restart count 0 W0322 00:02:12.830139 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:02:13.147: INFO: Latency metrics for node latest-worker2 Mar 22 00:02:13.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3364" for this suite. 
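------------------------------
Note on the recurring BeforeEach failure in this run (summarized again just below): per the node dumps above, e2e.test added the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal with effect NoExecute to both latest-worker and latest-worker2 at 2021-03-22 00:01:45, and latest-control-plane carries the standard node-role.kubernetes.io/master:NoSchedule taint. With every node tainted either NoSchedule or NoExecute, the framework's ready/schedulable node lookup comes up empty, hence "there are currently no ready, schedulable nodes in the cluster". A NoExecute taint also evicts running pods that lack a matching toleration, which is consistent with kindnet-g99fx, chaos-daemon-jxjgk and coredns-74ff55c5b-9sxfg reporting ready: false on latest-worker above. The following is a minimal standalone sketch of that kind of check — it is not part of the e2e framework; it assumes client-go, a cluster reachable via the default kubeconfig, and a deliberately simpler taint filter than the framework's (which also honors tolerations):

package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the same kubeconfig the e2e run uses (>>> kubeConfig: /root/.kube/config).
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }

    readySchedulable := 0
    for _, node := range nodes.Items {
        // Schedulable here means: not cordoned and no NoSchedule/NoExecute taints
        // (the framework applies a similar but more nuanced filter).
        schedulable := !node.Spec.Unschedulable
        for _, t := range node.Spec.Taints {
            if t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute {
                schedulable = false
            }
        }
        // Ready means the NodeReady condition reports True (as in the dumps above).
        ready := false
        for _, c := range node.Status.Conditions {
            if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
                ready = true
            }
        }
        fmt.Printf("%s ready=%v schedulable=%v taints=%v\n", node.Name, ready, schedulable, node.Spec.Taints)
        if ready && schedulable {
            readySchedulable++
        }
    }
    if readySchedulable == 0 {
        fmt.Println("there are currently no ready, schedulable nodes in the cluster")
    }
}

Run against this cluster at 00:02, the sketch would print ready=true schedulable=false for all three nodes (Ready per the dumps above, but tainted), reproducing the error. The same condition explains the Projected configMap pod further below sitting in Phase="Pending" for the whole poll: until the eviction taint is removed, no node can accept it.
------------------------------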
• Failure in Spec Setup (BeforeEach) [1.744 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Mar 22 00:02:11.703: Unexpected error: <*errors.errorString | 0xc000fc1160>: { s: "there are currently no ready, schedulable nodes in the cluster", } there are currently no ready, schedulable nodes in the cluster occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":26,"skipped":1414,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:02:13.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-c5d271e5-78bf-4113-aecf-124d69242af3 STEP: Creating a pod to test consume configMaps Mar 22 00:02:13.505: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7" in namespace "projected-6659" to be "Succeeded or Failed" Mar 22 00:02:13.556: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 51.551003ms Mar 22 00:02:15.572: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06689296s Mar 22 00:02:17.603: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098228371s Mar 22 00:02:19.663: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158054889s Mar 22 00:02:21.667: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161994142s Mar 22 00:02:23.724: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219239716s Mar 22 00:02:25.777: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.271662008s Mar 22 00:02:27.843: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.338244747s Mar 22 00:02:29.909: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.404512578s Mar 22 00:02:31.914: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.4087337s Mar 22 00:02:33.999: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.493773554s Mar 22 00:02:36.071: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.565984392s Mar 22 00:02:38.125: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.620382902s Mar 22 00:02:40.131: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.625936581s Mar 22 00:02:42.156: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.650990817s Mar 22 00:02:44.177: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.671881345s Mar 22 00:02:46.194: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.689179049s Mar 22 00:02:48.303: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.798537383s Mar 22 00:02:50.330: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.825116493s Mar 22 00:02:52.502: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.996922513s Mar 22 00:02:54.538: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.03340298s Mar 22 00:02:56.791: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.286385307s Mar 22 00:02:59.354: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 45.848879962s Mar 22 00:03:01.671: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.16564144s Mar 22 00:03:04.891: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 51.386244166s Mar 22 00:03:07.336: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Pending", Reason="", readiness=false. Elapsed: 53.830613164s Mar 22 00:03:09.389: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 55.883686s STEP: Saw pod success Mar 22 00:03:09.389: INFO: Pod "pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7" satisfied condition "Succeeded or Failed" Mar 22 00:03:09.533: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7 container agnhost-container: STEP: delete the pod Mar 22 00:03:09.738: INFO: Waiting for pod pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7 to disappear Mar 22 00:03:09.755: INFO: Pod pod-projected-configmaps-da1c5a0a-814b-45d6-a8c8-972c8005acf7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6659" for this suite. 
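The long run of Phase="Pending" ... Elapsed entries above is a phase poll: the test re-reads the pod every couple of seconds until it satisfies the "Succeeded or Failed" condition or the 5m0s deadline expires. A minimal sketch of such a loop, assuming client-go's wait helpers; the function name is hypothetical and the framework's real helper carries extra bookkeeping:

    package e2esketch

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPodSucceeded polls the pod phase every 2s for up to 5m,
    // producing one "Pending ... Elapsed" style observation per round.
    func waitForPodSucceeded(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		switch pod.Status.Phase {
    		case v1.PodSucceeded:
    			return true, nil // condition "Succeeded or Failed" satisfied
    		case v1.PodFailed:
    			return false, fmt.Errorf("pod %s/%s failed", ns, name)
    		default:
    			return false, nil // still Pending/Running; keep polling
    		}
    	})
    }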
• [SLOW TEST:56.619 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":27,"skipped":1469,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:09.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 22 00:03:14.161: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3286 PodName:hostexec-latest-worker-4df55 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:03:14.161: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:03:14.279: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 22 00:03:14.279: INFO: exec latest-worker: stdout: "0\n" Mar 22 00:03:14.279: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 22 
00:03:14.279: INFO: exec latest-worker: exit code: 0 Mar 22 00:03:14.279: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:14.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3286" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.495 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:14.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:03:19.015: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2ca6ae0a-6b77-4107-b6e1-a5003e2f0af7] Namespace:persistent-local-volumes-test-7021 PodName:hostexec-latest-worker2-mqg8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:03:19.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:03:19.135: INFO: Creating a PV followed by a PVC Mar 22 00:03:19.579: INFO: Waiting for PV local-pvfc2xv to bind to PVC pvc-wzhfk Mar 22 00:03:19.579: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wzhfk] to have phase Bound Mar 22 00:03:19.849: INFO: PersistentVolumeClaim pvc-wzhfk found but phase is Pending 
instead of Bound. Mar 22 00:03:21.879: INFO: PersistentVolumeClaim pvc-wzhfk found but phase is Pending instead of Bound. Mar 22 00:03:23.886: INFO: PersistentVolumeClaim pvc-wzhfk found but phase is Pending instead of Bound. Mar 22 00:03:25.958: INFO: PersistentVolumeClaim pvc-wzhfk found but phase is Pending instead of Bound. Mar 22 00:03:28.148: INFO: PersistentVolumeClaim pvc-wzhfk found and phase=Bound (8.569200444s) Mar 22 00:03:28.148: INFO: Waiting up to 3m0s for PersistentVolume local-pvfc2xv to have phase Bound Mar 22 00:03:28.461: INFO: PersistentVolume local-pvfc2xv found and phase=Bound (312.895197ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 22 00:03:28.557: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:03:28.558: INFO: Deleting PersistentVolumeClaim "pvc-wzhfk" Mar 22 00:03:29.157: INFO: Deleting PersistentVolume "local-pvfc2xv" STEP: Removing the test directory Mar 22 00:03:29.292: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2ca6ae0a-6b77-4107-b6e1-a5003e2f0af7] Namespace:persistent-local-volumes-test-7021 PodName:hostexec-latest-worker2-mqg8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:03:29.292: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:29.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7021" for this suite. 
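The pvc-wzhfk sequence above ("found but phase is Pending instead of Bound", then "found and phase=Bound") is the same poll pattern applied to a PersistentVolumeClaim: the claim stays Pending until the PV controller binds it to the pre-created local PV. A minimal sketch, assuming client-go — an illustrative helper, not the framework's code:

    package e2esketch

    import (
    	"context"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPVCBound polls until the claim reports phase Bound, matching
    // the 3m0s wait logged above.
    func waitForPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
    		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		return pvc.Status.Phase == v1.ClaimBound, nil // Pending -> keep polling
    	})
    }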
S [SKIPPING] [15.239 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:29.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4810 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:03:30.495: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-attacher Mar 22 00:03:30.507: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4810 Mar 22 00:03:30.507: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4810 Mar 22 00:03:30.551: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4810 Mar 22 00:03:30.574: INFO: creating *v1.Role: csi-mock-volumes-4810-3379/external-attacher-cfg-csi-mock-volumes-4810 Mar 22 00:03:30.593: INFO: creating *v1.RoleBinding: csi-mock-volumes-4810-3379/csi-attacher-role-cfg Mar 22 00:03:30.633: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-provisioner Mar 22 00:03:30.693: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4810 Mar 22 00:03:30.694: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4810 Mar 22 00:03:30.706: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4810 Mar 22 00:03:30.715: INFO: creating *v1.Role: csi-mock-volumes-4810-3379/external-provisioner-cfg-csi-mock-volumes-4810 Mar 22 00:03:30.724: INFO: creating *v1.RoleBinding: csi-mock-volumes-4810-3379/csi-provisioner-role-cfg Mar 22 00:03:30.760: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-resizer Mar 22 00:03:30.820: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4810 Mar 22 00:03:30.820: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4810 Mar 22 00:03:30.825: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4810 Mar 22 00:03:30.843: INFO: creating *v1.Role: 
csi-mock-volumes-4810-3379/external-resizer-cfg-csi-mock-volumes-4810 Mar 22 00:03:30.855: INFO: creating *v1.RoleBinding: csi-mock-volumes-4810-3379/csi-resizer-role-cfg Mar 22 00:03:30.879: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-snapshotter Mar 22 00:03:30.898: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4810 Mar 22 00:03:30.898: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4810 Mar 22 00:03:30.953: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4810 Mar 22 00:03:30.981: INFO: creating *v1.Role: csi-mock-volumes-4810-3379/external-snapshotter-leaderelection-csi-mock-volumes-4810 Mar 22 00:03:31.018: INFO: creating *v1.RoleBinding: csi-mock-volumes-4810-3379/external-snapshotter-leaderelection Mar 22 00:03:31.034: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-mock Mar 22 00:03:31.083: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4810 Mar 22 00:03:31.088: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4810 Mar 22 00:03:31.113: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4810 Mar 22 00:03:31.131: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4810 Mar 22 00:03:31.155: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4810 Mar 22 00:03:31.173: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4810 Mar 22 00:03:31.215: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4810 Mar 22 00:03:31.221: INFO: creating *v1.StatefulSet: csi-mock-volumes-4810-3379/csi-mockplugin Mar 22 00:03:31.239: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4810 Mar 22 00:03:31.263: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4810" Mar 22 00:03:31.341: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4810 to register on node latest-worker STEP: Creating pod Mar 22 00:03:42.341: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:03:42.426: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-jxzjq] to have phase Bound Mar 22 00:03:42.592: INFO: PersistentVolumeClaim pvc-jxzjq found but phase is Pending instead of Bound. 
Mar 22 00:03:44.608: INFO: PersistentVolumeClaim pvc-jxzjq found and phase=Bound (2.182055742s) Mar 22 00:03:48.749: INFO: Deleting pod "pvc-volume-tester-rhp8s" in namespace "csi-mock-volumes-4810" Mar 22 00:03:48.838: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rhp8s" to be fully deleted STEP: Checking PVC events Mar 22 00:04:38.023: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxzjq", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4810", SelfLink:"", UID:"a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", ResourceVersion:"6981053", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968222, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ddbce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ddbcf8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0024d62a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0024d62b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:04:38.023: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxzjq", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4810", SelfLink:"", UID:"a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", ResourceVersion:"6981057", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968222, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4810"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f397b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f397d0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f397e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f39800)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00247f4a0), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc00247f4b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:04:38.023: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxzjq", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4810", SelfLink:"", UID:"a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", ResourceVersion:"6981062", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968222, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4810"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94d68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94d80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94d98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94db0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", StorageClassName:(*string)(0xc0023de850), VolumeMode:(*v1.PersistentVolumeMode)(0xc0023de860), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:04:38.023: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxzjq", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4810", SelfLink:"", UID:"a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", ResourceVersion:"6981066", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968222, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4810"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94df8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94e10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94e28)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", StorageClassName:(*string)(0xc0023de890), VolumeMode:(*v1.PersistentVolumeMode)(0xc0023de8a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:04:38.023: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxzjq", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4810", SelfLink:"", UID:"a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", ResourceVersion:"6982760", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968222, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc004f94e58), DeletionGracePeriodSeconds:(*int64)(0xc0053a3308), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4810"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94e88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94ea0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94eb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94ed0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", StorageClassName:(*string)(0xc0023de8e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0023de8f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:04:38.023: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-jxzjq", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4810", SelfLink:"", UID:"a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", ResourceVersion:"6982765", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968222, loc:(*time.Location)(0x99208a0)}}, 
DeletionTimestamp:(*v1.Time)(0xc004f94f00), DeletionGracePeriodSeconds:(*int64)(0xc0053a33b8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4810"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94f18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94f30)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004f94f48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004f94f60)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0e7e63c-ae9c-4fa8-ba86-1a0676030abe", StorageClassName:(*string)(0xc0023de930), VolumeMode:(*v1.PersistentVolumeMode)(0xc0023de940), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-rhp8s Mar 22 00:04:38.023: INFO: Deleting pod "pvc-volume-tester-rhp8s" in namespace "csi-mock-volumes-4810" STEP: Deleting claim pvc-jxzjq STEP: Deleting storageclass csi-mock-volumes-4810-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4810 STEP: Waiting for namespaces [csi-mock-volumes-4810] to vanish STEP: uninstalling csi mock driver Mar 22 00:04:48.489: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-attacher Mar 22 00:04:48.547: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4810 Mar 22 00:04:48.559: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4810 Mar 22 00:04:48.609: INFO: deleting *v1.Role: csi-mock-volumes-4810-3379/external-attacher-cfg-csi-mock-volumes-4810 Mar 22 00:04:48.647: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4810-3379/csi-attacher-role-cfg Mar 22 00:04:48.677: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-provisioner Mar 22 00:04:48.782: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4810 Mar 22 00:04:48.798: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4810 Mar 22 00:04:48.837: INFO: deleting *v1.Role: csi-mock-volumes-4810-3379/external-provisioner-cfg-csi-mock-volumes-4810 Mar 22 00:04:48.929: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4810-3379/csi-provisioner-role-cfg Mar 22 00:04:48.979: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-resizer Mar 22 00:04:49.035: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4810 Mar 22 00:04:49.157: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4810 Mar 22 00:04:49.185: INFO: deleting 
*v1.Role: csi-mock-volumes-4810-3379/external-resizer-cfg-csi-mock-volumes-4810 Mar 22 00:04:49.252: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4810-3379/csi-resizer-role-cfg Mar 22 00:04:49.332: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-snapshotter Mar 22 00:04:49.353: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4810 Mar 22 00:04:49.454: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4810 Mar 22 00:04:49.552: INFO: deleting *v1.Role: csi-mock-volumes-4810-3379/external-snapshotter-leaderelection-csi-mock-volumes-4810 Mar 22 00:04:49.634: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4810-3379/external-snapshotter-leaderelection Mar 22 00:04:49.645: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4810-3379/csi-mock Mar 22 00:04:49.719: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4810 Mar 22 00:04:49.766: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4810 Mar 22 00:04:49.861: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4810 Mar 22 00:04:49.911: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4810 Mar 22 00:04:50.018: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4810 Mar 22 00:04:50.155: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4810 Mar 22 00:04:50.690: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4810 Mar 22 00:04:50.822: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4810-3379/csi-mockplugin Mar 22 00:04:50.983: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4810 STEP: deleting the driver namespace: csi-mock-volumes-4810-3379 STEP: Waiting for namespaces [csi-mock-volumes-4810-3379] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:05:39.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:129.643 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":133,"completed":28,"skipped":1609,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from 
pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:05:39.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 22 00:05:39.435: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:05:39.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3395" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.301 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:05:39.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-877 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:05:39.927: INFO: creating *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-attacher Mar 22 00:05:39.936: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-877 Mar 22 00:05:39.936: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-877 
Mar 22 00:05:39.960: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-877 Mar 22 00:05:39.996: INFO: creating *v1.Role: csi-mock-volumes-877-9047/external-attacher-cfg-csi-mock-volumes-877 Mar 22 00:05:40.038: INFO: creating *v1.RoleBinding: csi-mock-volumes-877-9047/csi-attacher-role-cfg Mar 22 00:05:40.049: INFO: creating *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-provisioner Mar 22 00:05:40.086: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-877 Mar 22 00:05:40.086: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-877 Mar 22 00:05:40.103: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-877 Mar 22 00:05:40.108: INFO: creating *v1.Role: csi-mock-volumes-877-9047/external-provisioner-cfg-csi-mock-volumes-877 Mar 22 00:05:40.169: INFO: creating *v1.RoleBinding: csi-mock-volumes-877-9047/csi-provisioner-role-cfg Mar 22 00:05:40.195: INFO: creating *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-resizer Mar 22 00:05:40.228: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-877 Mar 22 00:05:40.228: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-877 Mar 22 00:05:40.248: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-877 Mar 22 00:05:40.258: INFO: creating *v1.Role: csi-mock-volumes-877-9047/external-resizer-cfg-csi-mock-volumes-877 Mar 22 00:05:40.264: INFO: creating *v1.RoleBinding: csi-mock-volumes-877-9047/csi-resizer-role-cfg Mar 22 00:05:40.301: INFO: creating *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-snapshotter Mar 22 00:05:40.326: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-877 Mar 22 00:05:40.326: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-877 Mar 22 00:05:40.342: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-877 Mar 22 00:05:40.349: INFO: creating *v1.Role: csi-mock-volumes-877-9047/external-snapshotter-leaderelection-csi-mock-volumes-877 Mar 22 00:05:40.373: INFO: creating *v1.RoleBinding: csi-mock-volumes-877-9047/external-snapshotter-leaderelection Mar 22 00:05:40.434: INFO: creating *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-mock Mar 22 00:05:40.452: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-877 Mar 22 00:05:40.481: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-877 Mar 22 00:05:40.504: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-877 Mar 22 00:05:40.530: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-877 Mar 22 00:05:40.565: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-877 Mar 22 00:05:40.618: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-877 Mar 22 00:05:40.649: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-877 Mar 22 00:05:40.678: INFO: creating *v1.StatefulSet: csi-mock-volumes-877-9047/csi-mockplugin Mar 22 00:05:40.704: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-877 Mar 22 00:05:40.758: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-877" Mar 22 00:05:40.810: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-877 to register on node latest-worker2 STEP: Creating pod Mar 22 00:05:50.941: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil 
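This spec asserts that no VolumeAttachment object is created for the pod. That behavior is driven by the CSIDriver object registered above: when spec.attachRequired is false, Kubernetes skips the attach operation and never creates VolumeAttachment objects for the driver. A sketch of such an object in Go, assuming the storage/v1 API types; the name and the minimal spec are illustrative, and the mock driver's real object sets more fields:

    package e2esketch

    import (
    	storagev1 "k8s.io/api/storage/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // mockCSIDriver builds a CSIDriver that opts out of attachment.
    func mockCSIDriver(name string) *storagev1.CSIDriver {
    	attachRequired := false // no VolumeAttachment objects for this driver
    	return &storagev1.CSIDriver{
    		ObjectMeta: metav1.ObjectMeta{Name: name},
    		Spec: storagev1.CSIDriverSpec{
    			AttachRequired: &attachRequired,
    		},
    	}
    }

With such a driver registered, listing VolumeAttachments after the pod starts should come back empty, which is exactly the check logged below.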
Mar 22 00:05:51.038: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-m8vtw] to have phase Bound Mar 22 00:05:51.089: INFO: PersistentVolumeClaim pvc-m8vtw found but phase is Pending instead of Bound. Mar 22 00:05:53.126: INFO: PersistentVolumeClaim pvc-m8vtw found and phase=Bound (2.088114712s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-xcvtg Mar 22 00:05:58.007: INFO: Deleting pod "pvc-volume-tester-xcvtg" in namespace "csi-mock-volumes-877" Mar 22 00:05:58.057: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xcvtg" to be fully deleted STEP: Deleting claim pvc-m8vtw Mar 22 00:06:16.507: INFO: Waiting up to 2m0s for PersistentVolume pvc-cfa2b42c-71c1-4f7e-b38b-3e1697cf61bf to get deleted Mar 22 00:06:16.522: INFO: PersistentVolume pvc-cfa2b42c-71c1-4f7e-b38b-3e1697cf61bf found and phase=Bound (14.388604ms) Mar 22 00:06:18.691: INFO: PersistentVolume pvc-cfa2b42c-71c1-4f7e-b38b-3e1697cf61bf was removed STEP: Deleting storageclass csi-mock-volumes-877-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-877 STEP: Waiting for namespaces [csi-mock-volumes-877] to vanish STEP: uninstalling csi mock driver Mar 22 00:06:30.861: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-attacher Mar 22 00:06:30.939: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-877 Mar 22 00:06:30.997: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-877 Mar 22 00:06:31.046: INFO: deleting *v1.Role: csi-mock-volumes-877-9047/external-attacher-cfg-csi-mock-volumes-877 Mar 22 00:06:31.166: INFO: deleting *v1.RoleBinding: csi-mock-volumes-877-9047/csi-attacher-role-cfg Mar 22 00:06:31.220: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-provisioner Mar 22 00:06:31.298: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-877 Mar 22 00:06:31.339: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-877 Mar 22 00:06:31.357: INFO: deleting *v1.Role: csi-mock-volumes-877-9047/external-provisioner-cfg-csi-mock-volumes-877 Mar 22 00:06:31.387: INFO: deleting *v1.RoleBinding: csi-mock-volumes-877-9047/csi-provisioner-role-cfg Mar 22 00:06:31.433: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-resizer Mar 22 00:06:31.483: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-877 Mar 22 00:06:31.573: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-877 Mar 22 00:06:31.614: INFO: deleting *v1.Role: csi-mock-volumes-877-9047/external-resizer-cfg-csi-mock-volumes-877 Mar 22 00:06:31.706: INFO: deleting *v1.RoleBinding: csi-mock-volumes-877-9047/csi-resizer-role-cfg Mar 22 00:06:31.738: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-snapshotter Mar 22 00:06:31.777: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-877 Mar 22 00:06:31.846: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-877 Mar 22 00:06:31.863: INFO: deleting *v1.Role: csi-mock-volumes-877-9047/external-snapshotter-leaderelection-csi-mock-volumes-877 Mar 22 00:06:31.902: INFO: deleting *v1.RoleBinding: csi-mock-volumes-877-9047/external-snapshotter-leaderelection Mar 22 00:06:32.083: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-877-9047/csi-mock Mar 22 00:06:32.187: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-877 Mar 22 00:06:32.302: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-877 Mar 22 00:06:32.360: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-877 Mar 22 00:06:32.681: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-877 Mar 22 00:06:32.986: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-877 Mar 22 00:06:33.058: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-877 Mar 22 00:06:33.305: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-877 Mar 22 00:06:33.419: INFO: deleting *v1.StatefulSet: csi-mock-volumes-877-9047/csi-mockplugin Mar 22 00:06:33.670: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-877 STEP: deleting the driver namespace: csi-mock-volumes-877-9047 STEP: Waiting for namespaces [csi-mock-volumes-877-9047] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:17.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:98.220 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":133,"completed":29,"skipped":1684,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:17.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:07:21.975: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-dcaab348-e285-4eb8-82cc-72a1b1bd749c] Namespace:persistent-local-volumes-test-9314 PodName:hostexec-latest-worker2-tm98r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:07:21.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:07:22.078: INFO: Creating a PV followed by a PVC Mar 22 00:07:22.096: INFO: Waiting for PV local-pv4zzf7 to bind to PVC pvc-lxb4s Mar 22 00:07:22.096: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-lxb4s] to have phase Bound Mar 22 00:07:22.137: INFO: PersistentVolumeClaim pvc-lxb4s found but phase is Pending instead of Bound. Mar 22 00:07:24.141: INFO: PersistentVolumeClaim pvc-lxb4s found but phase is Pending instead of Bound. Mar 22 00:07:26.146: INFO: PersistentVolumeClaim pvc-lxb4s found but phase is Pending instead of Bound. Mar 22 00:07:28.151: INFO: PersistentVolumeClaim pvc-lxb4s found and phase=Bound (6.055688022s) Mar 22 00:07:28.151: INFO: Waiting up to 3m0s for PersistentVolume local-pv4zzf7 to have phase Bound Mar 22 00:07:28.155: INFO: PersistentVolume local-pv4zzf7 found and phase=Bound (3.745478ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:07:32.227: INFO: pod "pod-e31afba3-11cf-4af8-a510-82f6cf48f272" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:07:32.227: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9314 PodName:pod-e31afba3-11cf-4af8-a510-82f6cf48f272 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:07:32.227: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:07:32.317: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:07:32.318: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9314 PodName:pod-e31afba3-11cf-4af8-a510-82f6cf48f272 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:07:32.318: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:07:32.436: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:07:36.475: INFO: pod "pod-5b2c149d-c228-4348-aef4-5728ef75911b" created on Node "latest-worker2" Mar 22 00:07:36.475: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9314 PodName:pod-5b2c149d-c228-4348-aef4-5728ef75911b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Mar 22 00:07:36.475: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:07:36.575: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 22 00:07:36.575: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-dcaab348-e285-4eb8-82cc-72a1b1bd749c > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9314 PodName:pod-5b2c149d-c228-4348-aef4-5728ef75911b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:07:36.575: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:07:36.747: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-dcaab348-e285-4eb8-82cc-72a1b1bd749c > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 22 00:07:36.748: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9314 PodName:pod-e31afba3-11cf-4af8-a510-82f6cf48f272 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:07:36.748: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:07:36.890: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-dcaab348-e285-4eb8-82cc-72a1b1bd749c", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-e31afba3-11cf-4af8-a510-82f6cf48f272 in namespace persistent-local-volumes-test-9314 STEP: Deleting pod2 STEP: Deleting pod pod-5b2c149d-c228-4348-aef4-5728ef75911b in namespace persistent-local-volumes-test-9314 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:07:36.929: INFO: Deleting PersistentVolumeClaim "pvc-lxb4s" Mar 22 00:07:36.950: INFO: Deleting PersistentVolume "local-pv4zzf7" STEP: Removing the test directory Mar 22 00:07:36.965: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dcaab348-e285-4eb8-82cc-72a1b1bd749c] Namespace:persistent-local-volumes-test-9314 PodName:hostexec-latest-worker2-tm98r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:07:36.965: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:37.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9314" for this suite. 
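
The check the spec above performs can be approximated by hand with kubectl: both pods mount the same local PV at /mnt/volume1, so a file written through one pod is immediately readable through the other. A minimal sketch, assuming two such pods are already Running; the namespace and pod names below are placeholders (the suite generates random ones like pod-e31afba3-...):

NS=persistent-local-volumes-test     # placeholder namespace
POD1=pod-1; POD2=pod-2               # placeholder pod names
# Write a marker file from pod1 onto the shared local volume.
kubectl exec -n "$NS" "$POD1" -c write-pod -- \
  sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
# Read it back from pod2; expected output: test-file-content
kubectl exec -n "$NS" "$POD2" -c write-pod -- cat /mnt/volume1/test-file
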
• [SLOW TEST:19.399 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":30,"skipped":1687,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:37.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-5320 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:07:38.565: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-attacher Mar 22 00:07:38.705: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5320 Mar 22 00:07:38.705: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5320 Mar 22 00:07:38.797: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5320 Mar 22 00:07:39.021: INFO: creating *v1.Role: csi-mock-volumes-5320-8834/external-attacher-cfg-csi-mock-volumes-5320 Mar 22 00:07:39.208: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-5320-8834/csi-attacher-role-cfg Mar 22 00:07:39.271: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-provisioner Mar 22 00:07:39.520: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5320 Mar 22 00:07:39.520: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5320 Mar 22 00:07:39.527: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5320 Mar 22 00:07:39.620: INFO: creating *v1.Role: csi-mock-volumes-5320-8834/external-provisioner-cfg-csi-mock-volumes-5320 Mar 22 00:07:39.830: INFO: creating *v1.RoleBinding: csi-mock-volumes-5320-8834/csi-provisioner-role-cfg Mar 22 00:07:39.852: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-resizer Mar 22 00:07:39.903: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5320 Mar 22 00:07:39.903: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5320 Mar 22 00:07:39.924: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5320 Mar 22 00:07:39.962: INFO: creating *v1.Role: csi-mock-volumes-5320-8834/external-resizer-cfg-csi-mock-volumes-5320 Mar 22 00:07:39.975: INFO: creating *v1.RoleBinding: csi-mock-volumes-5320-8834/csi-resizer-role-cfg Mar 22 00:07:39.983: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-snapshotter Mar 22 00:07:40.008: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5320 Mar 22 00:07:40.008: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5320 Mar 22 00:07:40.019: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5320 Mar 22 00:07:40.025: INFO: creating *v1.Role: csi-mock-volumes-5320-8834/external-snapshotter-leaderelection-csi-mock-volumes-5320 Mar 22 00:07:40.031: INFO: creating *v1.RoleBinding: csi-mock-volumes-5320-8834/external-snapshotter-leaderelection Mar 22 00:07:40.093: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-mock Mar 22 00:07:40.097: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5320 Mar 22 00:07:40.129: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5320 Mar 22 00:07:40.188: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5320 Mar 22 00:07:40.243: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5320 Mar 22 00:07:40.434: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5320 Mar 22 00:07:40.438: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5320 Mar 22 00:07:40.469: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5320 Mar 22 00:07:40.529: INFO: creating *v1.StatefulSet: csi-mock-volumes-5320-8834/csi-mockplugin Mar 22 00:07:40.713: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5320 Mar 22 00:07:40.759: INFO: creating *v1.StatefulSet: csi-mock-volumes-5320-8834/csi-mockplugin-attacher Mar 22 00:07:40.795: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5320" Mar 22 00:07:40.870: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5320 to register on node latest-worker STEP: Creating pod Mar 22 00:07:50.488: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:07:50.800: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-b7ddn] to have phase Bound Mar 22 00:07:50.860: 
INFO: PersistentVolumeClaim pvc-b7ddn found but phase is Pending instead of Bound. Mar 22 00:07:52.866: INFO: PersistentVolumeClaim pvc-b7ddn found and phase=Bound (2.065664247s) STEP: checking for CSIInlineVolumes feature Mar 22 00:08:01.441: INFO: Pod inline-volume-hf475 has the following logs: Mar 22 00:08:01.620: INFO: Deleting pod "inline-volume-hf475" in namespace "csi-mock-volumes-5320" Mar 22 00:08:01.625: INFO: Wait up to 5m0s for pod "inline-volume-hf475" to be fully deleted STEP: Deleting the previously created pod Mar 22 00:08:16.017: INFO: Deleting pod "pvc-volume-tester-d6629" in namespace "csi-mock-volumes-5320" Mar 22 00:08:16.062: INFO: Wait up to 5m0s for pod "pvc-volume-tester-d6629" to be fully deleted STEP: Checking CSI driver logs Mar 22 00:08:36.324: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 51dde242-5c38-4556-9e37-80b65d4067f7 Mar 22 00:08:36.324: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Mar 22 00:08:36.324: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Mar 22 00:08:36.324: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-d6629 Mar 22 00:08:36.324: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5320 Mar 22 00:08:36.324: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/51dde242-5c38-4556-9e37-80b65d4067f7/volumes/kubernetes.io~csi/pvc-73cff02d-faf6-4058-8886-371ab22a1d97/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-d6629 Mar 22 00:08:36.324: INFO: Deleting pod "pvc-volume-tester-d6629" in namespace "csi-mock-volumes-5320" STEP: Deleting claim pvc-b7ddn Mar 22 00:08:36.405: INFO: Waiting up to 2m0s for PersistentVolume pvc-73cff02d-faf6-4058-8886-371ab22a1d97 to get deleted Mar 22 00:08:36.410: INFO: PersistentVolume pvc-73cff02d-faf6-4058-8886-371ab22a1d97 found and phase=Bound (5.150641ms) Mar 22 00:08:38.414: INFO: PersistentVolume pvc-73cff02d-faf6-4058-8886-371ab22a1d97 was removed STEP: Deleting storageclass csi-mock-volumes-5320-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5320 STEP: Waiting for namespaces [csi-mock-volumes-5320] to vanish STEP: uninstalling csi mock driver Mar 22 00:08:46.439: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-attacher Mar 22 00:08:46.444: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5320 Mar 22 00:08:46.464: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5320 Mar 22 00:08:46.472: INFO: deleting *v1.Role: csi-mock-volumes-5320-8834/external-attacher-cfg-csi-mock-volumes-5320 Mar 22 00:08:46.507: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5320-8834/csi-attacher-role-cfg Mar 22 00:08:46.529: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-provisioner Mar 22 00:08:46.574: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5320 Mar 22 00:08:46.604: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5320 Mar 22 00:08:46.667: INFO: deleting *v1.Role: csi-mock-volumes-5320-8834/external-provisioner-cfg-csi-mock-volumes-5320 Mar 22 00:08:46.682: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5320-8834/csi-provisioner-role-cfg Mar 22 00:08:46.688: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-5320-8834/csi-resizer Mar 22 00:08:46.724: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5320 Mar 22 00:08:46.779: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5320 Mar 22 00:08:46.812: INFO: deleting *v1.Role: csi-mock-volumes-5320-8834/external-resizer-cfg-csi-mock-volumes-5320 Mar 22 00:08:46.819: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5320-8834/csi-resizer-role-cfg Mar 22 00:08:46.830: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-snapshotter Mar 22 00:08:46.850: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5320 Mar 22 00:08:46.872: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5320 Mar 22 00:08:46.899: INFO: deleting *v1.Role: csi-mock-volumes-5320-8834/external-snapshotter-leaderelection-csi-mock-volumes-5320 Mar 22 00:08:46.909: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5320-8834/external-snapshotter-leaderelection Mar 22 00:08:46.915: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5320-8834/csi-mock Mar 22 00:08:46.921: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5320 Mar 22 00:08:47.082: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5320 Mar 22 00:08:47.106: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5320 Mar 22 00:08:47.113: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5320 Mar 22 00:08:47.118: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5320 Mar 22 00:08:47.125: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5320 Mar 22 00:08:47.147: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5320 Mar 22 00:08:47.187: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5320-8834/csi-mockplugin Mar 22 00:08:47.204: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5320 Mar 22 00:08:47.208: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5320-8834/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5320-8834 STEP: Waiting for namespaces [csi-mock-volumes-5320-8834] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:45.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:128.067 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":133,"completed":31,"skipped":1760,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress 
with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:45.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 22 00:09:45.384: INFO: Waiting up to 5m0s for pod "pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3" in namespace "emptydir-8973" to be "Succeeded or Failed" Mar 22 00:09:45.388: INFO: Pod "pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092042ms Mar 22 00:09:47.412: INFO: Pod "pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027751876s Mar 22 00:09:49.425: INFO: Pod "pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04040795s STEP: Saw pod success Mar 22 00:09:49.425: INFO: Pod "pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3" satisfied condition "Succeeded or Failed" Mar 22 00:09:49.428: INFO: Trying to get logs from node latest-worker2 pod pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3 container test-container: STEP: delete the pod Mar 22 00:09:49.719: INFO: Waiting for pod pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3 to disappear Mar 22 00:09:49.790: INFO: Pod pod-b9a9b94b-11b8-42ca-a45e-02c050ab53d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:49.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8973" for this suite. 
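
What the emptydir spec above verifies can be reproduced with a one-off pod: when pod.spec.securityContext.fsGroup is set, the kubelet makes the emptyDir group-owned by that GID with the setgid bit, so files created by a non-root container inherit the group. A rough sketch, not the suite's own pod spec; the name, image, and IDs below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123                     # GID the volume should be shared with
  containers:
  - name: test
    image: busybox
    securityContext:
      runAsUser: 1000                # non-root writer
    command: ["sh", "-c", "echo hi > /mnt/f && stat -c '%g %a' /mnt/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt
  volumes:
  - name: vol
    emptyDir:
      medium: Memory                 # tmpfs, as in the spec above
EOF
kubectl logs emptydir-fsgroup-demo   # expect gid 123 on the newly created file
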
•{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":133,"completed":32,"skipped":1801,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:49.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-9b45d892-27c3-4bbd-bc8a-b381ea25a821 STEP: Creating a pod to test consume configMaps Mar 22 00:09:49.997: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf" in namespace "projected-6229" to be "Succeeded or Failed" Mar 22 00:09:50.023: INFO: Pod "pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 26.558218ms Mar 22 00:09:52.028: INFO: Pod "pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03099544s Mar 22 00:09:54.032: INFO: Pod "pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035924122s Mar 22 00:09:56.038: INFO: Pod "pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041355932s STEP: Saw pod success Mar 22 00:09:56.038: INFO: Pod "pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf" satisfied condition "Succeeded or Failed" Mar 22 00:09:56.041: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf container agnhost-container: STEP: delete the pod Mar 22 00:09:56.103: INFO: Waiting for pod pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf to disappear Mar 22 00:09:56.109: INFO: Pod pod-projected-configmaps-2d709e94-1288-4201-898d-7a2088891ccf no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:56.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6229" for this suite. • [SLOW TEST:6.266 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":33,"skipped":1809,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:56.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:10:00.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346-backend && mount --bind /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346-backend /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346-backend && ln -s /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346-backend /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346] Namespace:persistent-local-volumes-test-9032 PodName:hostexec-latest-worker2-6k7df ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:10:00.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:10:00.518: INFO: Creating a PV followed by a PVC Mar 22 00:10:00.533: INFO: Waiting for PV local-pvn2r6d to bind to PVC pvc-rljwz Mar 22 00:10:00.533: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rljwz] to have phase Bound Mar 22 00:10:00.551: INFO: PersistentVolumeClaim pvc-rljwz found but phase is Pending instead of Bound. Mar 22 00:10:02.557: INFO: PersistentVolumeClaim pvc-rljwz found but phase is Pending instead of Bound. Mar 22 00:10:04.561: INFO: PersistentVolumeClaim pvc-rljwz found but phase is Pending instead of Bound. Mar 22 00:10:06.565: INFO: PersistentVolumeClaim pvc-rljwz found but phase is Pending instead of Bound. Mar 22 00:10:08.571: INFO: PersistentVolumeClaim pvc-rljwz found but phase is Pending instead of Bound. Mar 22 00:10:10.576: INFO: PersistentVolumeClaim pvc-rljwz found but phase is Pending instead of Bound. 
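
While the claim binds, the nsenter exec above is worth unpacking: the dir-link-bindmounted volume type stages a backing directory, bind-mounts it onto itself so it registers as a real mount point, then exposes it through a symlink, which is the path the local PV ultimately references. A sketch of the equivalent node-side steps, with a placeholder path in place of the generated UUID path:

# Run on the node (the suite does this via nsenter into the host mount namespace).
DIR=/tmp/local-volume-test-example               # placeholder path
mkdir "${DIR}-backend"
mount --bind "${DIR}-backend" "${DIR}-backend"   # self bind-mount marks it a mount point
ln -s "${DIR}-backend" "${DIR}"                  # the PV's local path points at this symlink
# Teardown mirrors the AfterEach later in this spec:
#   rm "${DIR}" && umount "${DIR}-backend" && rm -r "${DIR}-backend"
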
Mar 22 00:10:12.593: INFO: PersistentVolumeClaim pvc-rljwz found and phase=Bound (12.059961752s) Mar 22 00:10:12.593: INFO: Waiting up to 3m0s for PersistentVolume local-pvn2r6d to have phase Bound Mar 22 00:10:12.672: INFO: PersistentVolume local-pvn2r6d found and phase=Bound (78.919773ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:10:18.823: INFO: pod "pod-921add7c-7d6f-40d0-8b90-02e4d886cd76" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:10:18.823: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9032 PodName:pod-921add7c-7d6f-40d0-8b90-02e4d886cd76 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:10:18.823: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:18.926: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:10:18.926: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9032 PodName:pod-921add7c-7d6f-40d0-8b90-02e4d886cd76 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:10:18.926: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:19.063: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:10:25.135: INFO: pod "pod-7d353bd6-4d97-447f-a11d-ca1576e7a7f0" created on Node "latest-worker2" Mar 22 00:10:25.135: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9032 PodName:pod-7d353bd6-4d97-447f-a11d-ca1576e7a7f0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:10:25.135: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:25.221: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 22 00:10:25.221: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9032 PodName:pod-7d353bd6-4d97-447f-a11d-ca1576e7a7f0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:10:25.221: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:25.309: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 22 00:10:25.309: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9032 PodName:pod-921add7c-7d6f-40d0-8b90-02e4d886cd76 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:10:25.309: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:25.390: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-921add7c-7d6f-40d0-8b90-02e4d886cd76 in namespace persistent-local-volumes-test-9032 STEP: Deleting pod2 STEP: Deleting pod pod-7d353bd6-4d97-447f-a11d-ca1576e7a7f0 in namespace persistent-local-volumes-test-9032 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:10:25.491: INFO: Deleting PersistentVolumeClaim "pvc-rljwz" Mar 22 00:10:25.546: INFO: Deleting PersistentVolume "local-pvn2r6d" STEP: Removing the test directory Mar 22 00:10:25.588: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346 && umount /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346-backend && rm -r /tmp/local-volume-test-ba5be817-d539-4566-9e9d-33622b6b6346-backend] Namespace:persistent-local-volumes-test-9032 PodName:hostexec-latest-worker2-6k7df ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:10:25.588: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:25.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9032" for this suite. • [SLOW TEST:29.606 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":34,"skipped":1834,"failed":7,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2"]} SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode 
[LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:10:25.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Mar 22 00:10:25.846: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8796" to be "Succeeded or Failed"
Mar 22 00:10:25.856: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.37905ms
Mar 22 00:10:27.860: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014380649s
Mar 22 00:10:29.864: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018115036s
Mar 22 00:10:31.905: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059275548s
Mar 22 00:10:33.909: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063285295s
STEP: Saw pod success
Mar 22 00:10:33.909: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 22 00:10:33.911: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 22 00:10:34.271: INFO: Waiting for pod pod-host-path-test to disappear
Mar 22 00:10:34.274: INFO: Pod pod-host-path-test no longer exists
Mar 22 00:10:34.274: FAIL: Unexpected error:
    <*errors.errorString | 0xc002f422c0>: {
        s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n    : mount type of \"/test-volume\": tmpfs\n      mode of file \"/test-volume\": dgtrwxrwxrwx\n    \nto contain substring\n    : mode of file \"/test-volume\": dtrwxrwx",
    }
    expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
        : mount type of "/test-volume": tmpfs
          mode of file "/test-volume": dgtrwxrwxrwx
    to contain substring
        : mode of file "/test-volume": dtrwxrwx
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00063da20, 0x6b6efc8, 0xd, 0xc00188e800, 0x0, 0xc000fb51c0, 0x1, 0x1, 0x6d64568)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:564 k8s.io/kubernetes/test/e2e/common/storage.glob..func5.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:59 +0x299 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "hostpath-8796". STEP: Found 7 events. Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:25 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-8796/pod-host-path-test to latest-worker Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:28 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:30 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Created: Created container test-container-1 Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:30 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Started: Started container test-container-1 Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:30 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:31 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Created: Created container test-container-2 Mar 22 00:10:34.319: INFO: At 2021-03-22 00:10:32 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Started: Started container test-container-2 Mar 22 00:10:34.322: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:10:34.323: INFO: Mar 22 00:10:34.327: INFO: Logging node info for node latest-control-plane Mar 22 00:10:34.330: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6987794 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:09:34 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:09:34 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:09:34 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:09:34 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:10:34.330: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:10:34.366: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:10:34.384: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:10:34.384: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:10:34.384: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:10:34.384: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container kube-apiserver ready: true, restart count 0 
Mar 22 00:10:34.384: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:10:34.384: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container coredns ready: true, restart count 0 Mar 22 00:10:34.384: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container etcd ready: true, restart count 0 Mar 22 00:10:34.384: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:10:34.384: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.384: INFO: Container coredns ready: true, restart count 0 W0322 00:10:34.410378 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:10:34.493: INFO: Latency metrics for node latest-control-plane Mar 22 00:10:34.493: INFO: Logging node info for node latest-worker Mar 22 00:10:34.497: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6987474 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes
-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-
7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:08:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:08:24 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:08:24 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:08:24 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:08:24 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:10:34.497: INFO: Logging kubelet events for node latest-worker Mar 22 00:10:34.505: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:10:34.587: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.587: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:10:34.587: INFO: host-test-container-pod started at 2021-03-22 00:08:50 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.587: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:10:34.587: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.587: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:10:34.587: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 
container statuses recorded) Mar 22 00:10:34.587: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:10:34.587: INFO: pod-update-0eb66f6d-b0e0-4ecb-9191-f3fc072b2c55 started at 2021-03-22 00:10:26 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.587: INFO: Container nginx ready: true, restart count 0 Mar 22 00:10:34.587: INFO: netserver-0 started at 2021-03-22 00:08:25 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.587: INFO: Container webserver ready: true, restart count 0 Mar 22 00:10:34.587: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:34.587: INFO: Container kube-proxy ready: true, restart count 0 W0322 00:10:34.593992 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:10:34.988: INFO: Latency metrics for node latest-worker Mar 22 00:10:34.988: INFO: Logging node info for node latest-worker2 Mar 22 00:10:34.994: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6987407 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-
csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-
mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051",
"csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:08:44 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:08:44 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:08:44 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:08:44 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 
k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:10:34.995: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:10:35.001: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:10:35.007: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:10:35.007: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:10:35.007: INFO: netserver-1 started at 2021-03-22 00:08:25 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container webserver ready: true, restart count 0 Mar 22 00:10:35.007: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:10:35.007: INFO: pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49 started at 2021-03-22 
00:10:33 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container test-container ready: false, restart count 0 Mar 22 00:10:35.007: INFO: test-container-pod started at 2021-03-22 00:08:50 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container webserver ready: true, restart count 0 Mar 22 00:10:35.007: INFO: back-off-cap started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:10:35.007: INFO: Container back-off-cap ready: false, restart count 6 W0322 00:10:35.071107 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:10:35.692: INFO: Latency metrics for node latest-worker2 Mar 22 00:10:35.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8796" for this suite.

• Failure [9.975 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48

  Mar 22 00:10:34.274: Unexpected error:
      <*errors.errorString | 0xc002f422c0>: {
          s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n    : mount type of \"/test-volume\": tmpfs\n    mode of file \"/test-volume\": dgtrwxrwxrwx\n    \nto contain substring\n    : mode of file \"/test-volume\": dtrwxrwx",
      }
      expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
          : mount type of "/test-volume": tmpfs
          mode of file "/test-volume": dgtrwxrwxrwx
      to contain substring
          : mode of file "/test-volume": dtrwxrwx
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":133,"completed":34,"skipped":1843,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
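The mode strings in this failure are Go os.FileMode renderings: the container reported dgtrwxrwxrwx (directory, setgid, sticky, permissions 0777), while the test expected the same directory mode without the setgid "g" bit (the expected string is cut short in the quoted error). A minimal standalone sketch, assuming the expectation is a sticky 0777 directory, that reproduces both renderings:

package main

import (
	"fmt"
	"os"
)

func main() {
	// What the HostPath test appears to expect: a sticky, world-writable directory.
	expected := os.ModeDir | os.ModeSticky | 0777
	// What the container actually observed: the same directory with setgid added.
	actual := os.ModeDir | os.ModeSetgid | os.ModeSticky | 0777
	fmt.Println(expected) // dtrwxrwxrwx
	fmt.Println(actual)   // dgtrwxrwxrwx
}

The substring match fails only because of the extra "g": kubelet can add the setgid bit to a volume directory when an fsGroup is applied, which is consistent with the tmpfs mount type reported above.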
SSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume storage capacity
  exhausted, immediate binding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:35.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-3237 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 22 00:10:36.712: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-attacher Mar 22 00:10:36.737: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3237 Mar 22 00:10:36.737: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3237 Mar 22 00:10:36.749: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3237 Mar 22 00:10:36.755: INFO: creating *v1.Role: csi-mock-volumes-3237-7895/external-attacher-cfg-csi-mock-volumes-3237 Mar 22 00:10:36.776: INFO: creating *v1.RoleBinding: csi-mock-volumes-3237-7895/csi-attacher-role-cfg Mar 22 00:10:36.801: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-provisioner Mar 22 00:10:36.874: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3237 Mar 22 00:10:36.874: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3237 Mar 22 00:10:36.884: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3237 Mar 22 00:10:36.889: INFO: creating *v1.Role: csi-mock-volumes-3237-7895/external-provisioner-cfg-csi-mock-volumes-3237 Mar 22 00:10:36.895: INFO: creating *v1.RoleBinding: csi-mock-volumes-3237-7895/csi-provisioner-role-cfg Mar 22 00:10:36.933: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-resizer Mar 22 00:10:37.279: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3237 Mar 22 00:10:37.279: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3237 Mar 22 00:10:37.405: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3237 Mar 22 00:10:37.443: INFO: creating *v1.Role: csi-mock-volumes-3237-7895/external-resizer-cfg-csi-mock-volumes-3237 Mar 22 00:10:37.470: INFO: creating *v1.RoleBinding: csi-mock-volumes-3237-7895/csi-resizer-role-cfg Mar 22 00:10:37.527: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-snapshotter Mar 22 00:10:37.549: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3237 Mar 22 00:10:37.549: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3237 Mar 22 00:10:37.588: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3237 Mar 22 00:10:37.633: INFO: creating *v1.Role: csi-mock-volumes-3237-7895/external-snapshotter-leaderelection-csi-mock-volumes-3237 Mar 22 00:10:37.713: INFO: creating *v1.RoleBinding: csi-mock-volumes-3237-7895/external-snapshotter-leaderelection Mar 22 00:10:37.722: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-mock Mar 22 00:10:37.743: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3237 Mar 22 00:10:37.758: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3237 Mar 22 00:10:37.764: INFO: creating *v1.ClusterRoleBinding:
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3237 Mar 22 00:10:37.770: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3237 Mar 22 00:10:37.791: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3237 Mar 22 00:10:37.806: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3237 Mar 22 00:10:37.842: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3237 Mar 22 00:10:37.860: INFO: creating *v1.StatefulSet: csi-mock-volumes-3237-7895/csi-mockplugin Mar 22 00:10:37.866: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3237 Mar 22 00:10:37.884: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3237" Mar 22 00:10:37.958: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3237 to register on node latest-worker2 I0322 00:10:52.485541 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0322 00:10:52.487572 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3237","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0322 00:10:52.531396 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0322 00:10:52.582925 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0322 00:10:52.584471 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3237","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0322 00:10:53.089053 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3237"},"Error":"","FullError":null} STEP: Creating pod Mar 22 00:10:54.992: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:10:55.000: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-wlp4z] to have phase Bound Mar 22 00:10:55.004: INFO: PersistentVolumeClaim pvc-wlp4z found but phase is Pending instead of Bound. 
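The "Waiting up to 5m0s for PersistentVolumeClaims [pvc-wlp4z] to have phase Bound" loop above is a plain poll of the claim's status. A hedged client-go sketch of the same check; clientset construction is omitted and the helper name is invented for illustration, not the e2e framework's own helper:

package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound,
// mirroring the "found but phase is Pending instead of Bound" records above.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}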
I0322 00:10:55.008234 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0322 00:10:55.010225 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5"}}},"Error":"","FullError":null} Mar 22 00:10:57.271: INFO: PersistentVolumeClaim pvc-wlp4z found and phase=Bound (2.271275824s) I0322 00:10:57.666244 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 22 00:10:57.673: INFO: >>> kubeConfig: /root/.kube/config I0322 00:10:57.773040 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5","storage.kubernetes.io/csiProvisionerIdentity":"1616371852625-8081-csi-mock-csi-mock-volumes-3237"}},"Response":{},"Error":"","FullError":null} I0322 00:10:57.777902 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 22 00:10:57.780: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:57.877: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:10:57.977: INFO: >>> kubeConfig: /root/.kube/config I0322 00:10:58.062082 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5/globalmount","target_path":"/var/lib/kubelet/pods/61a1c68e-f207-4c5a-a5d5-fa1bf0371164/volumes/kubernetes.io~csi/pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5","storage.kubernetes.io/csiProvisionerIdentity":"1616371852625-8081-csi-mock-csi-mock-volumes-3237"}},"Response":{},"Error":"","FullError":null} Mar 22 00:11:01.423: INFO: Deleting pod "pvc-volume-tester-kpvp5" in namespace "csi-mock-volumes-3237" Mar 22 00:11:01.427: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kpvp5" to be fully deleted Mar 22 00:11:05.486: INFO: >>> kubeConfig: /root/.kube/config I0322 00:11:05.669167 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/61a1c68e-f207-4c5a-a5d5-fa1bf0371164/volumes/kubernetes.io~csi/pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5/mount"},"Response":{},"Error":"","FullError":null} 
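The paired CreateVolume records at the start of this block are the point of the spec: the mock driver rejects the first call with gRPC code 8 (ResourceExhausted, surfaced here as "fake error"), and the external-provisioner simply retries until the driver succeeds. A minimal sketch of a controller server with that behavior, assuming the CSI spec's Go bindings; the type and field names are illustrative, not the e2e mock driver's own:

package sketch

import (
	"context"
	"sync"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// exhaustedController fails the first CreateVolume to simulate exhausted
// capacity, then provisions normally on the provisioner's retry.
type exhaustedController struct {
	csi.UnimplementedControllerServer
	mu         sync.Mutex
	failedOnce bool
}

func (c *exhaustedController) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.failedOnce {
		c.failedOnce = true
		// codes.ResourceExhausted is gRPC code 8 -- the "FullError":{"code":8,...} above.
		return nil, status.Error(codes.ResourceExhausted, "fake error")
	}
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      "4",
			CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
		},
	}, nil
}

Because the StorageClass here uses immediate binding, the retry is driven entirely by the external-provisioner's backoff rather than by pod scheduling.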
I0322 00:11:05.689746 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0322 00:11:05.692537 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5/globalmount"},"Response":{},"Error":"","FullError":null} I0322 00:11:17.892546 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 22 00:11:18.553: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-wlp4z", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3237", SelfLink:"", UID:"3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", ResourceVersion:"6988460", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968654, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004da0e70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004da0e88)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001374ef0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001374f00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:11:18.553: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-wlp4z", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3237", SelfLink:"", UID:"3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", ResourceVersion:"6988461", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968654, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3237"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004da0f90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004da0fa8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004da0fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004da0fd8)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001374fd0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001374fe0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:11:18.554: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-wlp4z", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3237", SelfLink:"", UID:"3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", ResourceVersion:"6988469", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968654, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3237"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70cf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70d08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70d20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70d38)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", StorageClassName:(*string)(0xc00142a6f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00142a700), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:11:18.554: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-wlp4z", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3237", SelfLink:"", UID:"3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", ResourceVersion:"6988470", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968654, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3237"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, 
ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70d68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70d80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70d98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70db0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", StorageClassName:(*string)(0xc00142a730), VolumeMode:(*v1.PersistentVolumeMode)(0xc00142a740), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:11:18.554: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-wlp4z", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3237", SelfLink:"", UID:"3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", ResourceVersion:"6988943", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968654, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc005b70de0), DeletionGracePeriodSeconds:(*int64)(0xc003b7aed8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3237"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70df8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70e10)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70e28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70e40)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", StorageClassName:(*string)(0xc00142a780), VolumeMode:(*v1.PersistentVolumeMode)(0xc00142a790), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:11:18.554: INFO: PVC 
event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-wlp4z", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3237", SelfLink:"", UID:"3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", ResourceVersion:"6988957", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751968654, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc005b70eb8), DeletionGracePeriodSeconds:(*int64)(0xc003b7af88), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3237"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70f00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70fa8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005b70fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005b70fd8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3f7fd59b-bf2c-41cd-a00b-c6351a2fa7c5", StorageClassName:(*string)(0xc00142a7d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00142a7e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-kpvp5 Mar 22 00:11:18.554: INFO: Deleting pod "pvc-volume-tester-kpvp5" in namespace "csi-mock-volumes-3237" STEP: Deleting claim pvc-wlp4z STEP: Deleting storageclass csi-mock-volumes-3237-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3237 STEP: Waiting for namespaces [csi-mock-volumes-3237] to vanish STEP: uninstalling csi mock driver Mar 22 00:11:29.430: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-attacher Mar 22 00:11:29.434: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3237 Mar 22 00:11:29.446: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3237 Mar 22 00:11:29.486: INFO: deleting *v1.Role: csi-mock-volumes-3237-7895/external-attacher-cfg-csi-mock-volumes-3237 Mar 22 00:11:29.492: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3237-7895/csi-attacher-role-cfg Mar 22 00:11:29.513: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-provisioner Mar 22 00:11:29.562: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3237 Mar 22 00:11:29.586: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3237 Mar 22 00:11:29.594: INFO: deleting *v1.Role: csi-mock-volumes-3237-7895/external-provisioner-cfg-csi-mock-volumes-3237 Mar 22 00:11:29.618: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-3237-7895/csi-provisioner-role-cfg Mar 22 00:11:29.685: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-resizer Mar 22 00:11:29.701: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3237 Mar 22 00:11:29.707: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3237 Mar 22 00:11:29.982: INFO: deleting *v1.Role: csi-mock-volumes-3237-7895/external-resizer-cfg-csi-mock-volumes-3237 Mar 22 00:11:29.987: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3237-7895/csi-resizer-role-cfg Mar 22 00:11:30.016: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-snapshotter Mar 22 00:11:30.140: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3237 Mar 22 00:11:30.219: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3237 Mar 22 00:11:30.227: INFO: deleting *v1.Role: csi-mock-volumes-3237-7895/external-snapshotter-leaderelection-csi-mock-volumes-3237 Mar 22 00:11:30.804: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3237-7895/external-snapshotter-leaderelection Mar 22 00:11:31.070: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3237-7895/csi-mock Mar 22 00:11:31.095: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3237 Mar 22 00:11:31.153: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3237 Mar 22 00:11:31.280: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3237 Mar 22 00:11:31.471: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3237 Mar 22 00:11:31.521: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3237 Mar 22 00:11:31.651: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3237 Mar 22 00:11:31.676: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3237 Mar 22 00:11:31.884: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3237-7895/csi-mockplugin Mar 22 00:11:31.934: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3237 STEP: deleting the driver namespace: csi-mock-volumes-3237-7895 STEP: Waiting for namespaces [csi-mock-volumes-3237-7895] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:12:18.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:103.099 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":133,"completed":35,"skipped":1856,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse 
local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:12:18.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-fc56a5e0-aa19-4283-a13e-505023d24700 STEP: Creating a pod to test consume configMaps Mar 22 00:12:19.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5" in namespace "configmap-8092" to be "Succeeded or Failed" Mar 22 00:12:19.901: INFO: Pod "pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5": Phase="Pending", Reason="", readiness=false. Elapsed: 133.370911ms Mar 22 00:12:21.906: INFO: Pod "pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138346098s Mar 22 00:12:23.918: INFO: Pod "pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151175088s Mar 22 00:12:25.948: INFO: Pod "pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180822358s STEP: Saw pod success Mar 22 00:12:25.948: INFO: Pod "pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5" satisfied condition "Succeeded or Failed" Mar 22 00:12:25.955: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5 container agnhost-container: STEP: delete the pod Mar 22 00:12:26.162: INFO: Waiting for pod pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5 to disappear Mar 22 00:12:26.204: INFO: Pod pod-configmaps-a1cdd0ab-2a3f-45c9-a386-3ec6a83685c5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:12:26.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8092" for this suite. 
• [SLOW TEST:7.572 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":36,"skipped":1868,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:12:26.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1" Mar 22 00:12:31.514: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1 && dd if=/dev/zero of=/tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1/file] Namespace:persistent-local-volumes-test-1666 PodName:hostexec-latest-worker2-69bvl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:12:31.514: INFO: >>> kubeConfig: 
/root/.kube/config Mar 22 00:12:31.876: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1666 PodName:hostexec-latest-worker2-69bvl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:12:31.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:12:32.074: INFO: Creating a PV followed by a PVC Mar 22 00:12:32.259: INFO: Waiting for PV local-pvb42sp to bind to PVC pvc-z2ttz Mar 22 00:12:32.259: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-z2ttz] to have phase Bound Mar 22 00:12:32.397: INFO: PersistentVolumeClaim pvc-z2ttz found but phase is Pending instead of Bound. Mar 22 00:12:34.473: INFO: PersistentVolumeClaim pvc-z2ttz found and phase=Bound (2.213891311s) Mar 22 00:12:34.473: INFO: Waiting up to 3m0s for PersistentVolume local-pvb42sp to have phase Bound Mar 22 00:12:34.589: INFO: PersistentVolume local-pvb42sp found and phase=Bound (116.43095ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:12:40.902: INFO: pod "pod-adfe6fcf-f7cb-4571-8b26-f7553887291e" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:12:40.902: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1666 PodName:pod-adfe6fcf-f7cb-4571-8b26-f7553887291e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:12:40.902: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:12:40.990: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000054 seconds, 325.5KB/s", err: Mar 22 00:12:40.990: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1666 PodName:pod-adfe6fcf-f7cb-4571-8b26-f7553887291e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:12:40.990: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:12:41.080: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:12:47.854: INFO: pod "pod-bccc2d8e-5dff-4182-a62b-b28c6e0f9c09" created on Node "latest-worker2" Mar 22 00:12:47.854: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1666 PodName:pod-bccc2d8e-5dff-4182-a62b-b28c6e0f9c09 
ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:12:47.854: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:12:47.954: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Mar 22 00:12:47.954: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1666 PodName:pod-bccc2d8e-5dff-4182-a62b-b28c6e0f9c09 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:12:47.954: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:12:48.086: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.017244 seconds, 637B/s", err: STEP: Reading in pod1 Mar 22 00:12:48.086: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-1666 PodName:pod-adfe6fcf-f7cb-4571-8b26-f7553887291e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:12:48.086: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:12:48.196: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-adfe6fcf-f7cb-4571-8b26-f7553887291e in namespace persistent-local-volumes-test-1666 STEP: Deleting pod2 STEP: Deleting pod pod-bccc2d8e-5dff-4182-a62b-b28c6e0f9c09 in namespace persistent-local-volumes-test-1666 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:12:48.233: INFO: Deleting PersistentVolumeClaim "pvc-z2ttz" Mar 22 00:12:48.263: INFO: Deleting PersistentVolume "local-pvb42sp" Mar 22 00:12:48.282: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1666 PodName:hostexec-latest-worker2-69bvl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:12:48.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1/file Mar 22 00:12:48.401: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1666 PodName:hostexec-latest-worker2-69bvl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
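The ExecWithOptions entries around this point are the [Volume type: block] fixture at work: the volume under test is a loop device backed by a 20 MiB file on the node, created through the hostexec pod and detached again in AfterEach. Reconstructed from those commands, with a shortened placeholder path in place of the random /tmp name:

  DIR=/tmp/local-volume-test-demo                       # placeholder for the test's random path
  mkdir -p "$DIR"
  dd if=/dev/zero of="$DIR/file" bs=4096 count=5120     # 20 MiB backing file
  losetup -f "$DIR/file"                                # attach to the first free loop device

  # recover the device name the way the test does
  E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
  echo "$E2E_LOOP_DEV"                                  # e.g. /dev/loop0

  # the test's read-back check: first 100 bytes as printable characters
  hexdump -n 100 -e '100 "%_p"' "$E2E_LOOP_DEV" | head -1

  losetup -d "$E2E_LOOP_DEV"                            # teardown: detach the loop device ...
  rm -r "$DIR"                                          # ... then remove the backing directory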
Mar 22 00:12:48.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1 Mar 22 00:12:48.492: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cfb2cd62-e40c-4dc9-b21b-40a5c622a2c1] Namespace:persistent-local-volumes-test-1666 PodName:hostexec-latest-worker2-69bvl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:12:48.492: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:12:48.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1666" for this suite. • [SLOW TEST:22.555 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":37,"skipped":1895,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:12:48.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 22 00:12:49.262: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:12:49.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8015" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.371 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:12:49.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-6141 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:12:49.831: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-attacher Mar 22 00:12:49.835: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6141 Mar 22 00:12:49.835: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6141 Mar 22 00:12:49.844: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6141 Mar 22 00:12:49.850: INFO: creating *v1.Role: csi-mock-volumes-6141-464/external-attacher-cfg-csi-mock-volumes-6141 Mar 22 00:12:49.856: INFO: creating *v1.RoleBinding: csi-mock-volumes-6141-464/csi-attacher-role-cfg Mar 22 00:12:49.878: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-provisioner Mar 22 00:12:49.904: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6141 Mar 22 00:12:49.904: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6141 Mar 22 00:12:50.195: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6141 Mar 22 00:12:50.404: INFO: creating *v1.Role: 
csi-mock-volumes-6141-464/external-provisioner-cfg-csi-mock-volumes-6141 Mar 22 00:12:50.416: INFO: creating *v1.RoleBinding: csi-mock-volumes-6141-464/csi-provisioner-role-cfg Mar 22 00:12:50.430: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-resizer Mar 22 00:12:50.434: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6141 Mar 22 00:12:50.434: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6141 Mar 22 00:12:50.453: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6141 Mar 22 00:12:50.470: INFO: creating *v1.Role: csi-mock-volumes-6141-464/external-resizer-cfg-csi-mock-volumes-6141 Mar 22 00:12:50.542: INFO: creating *v1.RoleBinding: csi-mock-volumes-6141-464/csi-resizer-role-cfg Mar 22 00:12:50.548: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-snapshotter Mar 22 00:12:50.553: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6141 Mar 22 00:12:50.553: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6141 Mar 22 00:12:50.559: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6141 Mar 22 00:12:50.565: INFO: creating *v1.Role: csi-mock-volumes-6141-464/external-snapshotter-leaderelection-csi-mock-volumes-6141 Mar 22 00:12:50.617: INFO: creating *v1.RoleBinding: csi-mock-volumes-6141-464/external-snapshotter-leaderelection Mar 22 00:12:50.667: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-mock Mar 22 00:12:50.681: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6141 Mar 22 00:12:50.721: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6141 Mar 22 00:12:50.727: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6141 Mar 22 00:12:50.733: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6141 Mar 22 00:12:50.752: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6141 Mar 22 00:12:50.792: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6141 Mar 22 00:12:50.806: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6141 Mar 22 00:12:50.836: INFO: creating *v1.StatefulSet: csi-mock-volumes-6141-464/csi-mockplugin Mar 22 00:12:50.860: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6141 Mar 22 00:12:50.880: INFO: creating *v1.StatefulSet: csi-mock-volumes-6141-464/csi-mockplugin-attacher Mar 22 00:12:50.931: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6141" Mar 22 00:12:50.952: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6141 to register on node latest-worker2 STEP: Creating pod Mar 22 00:13:05.541: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Mar 22 00:13:27.731: INFO: Deleting pod "pvc-volume-tester-vqcjk" in namespace "csi-mock-volumes-6141" Mar 22 00:13:27.736: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vqcjk" to be fully deleted STEP: Deleting pod pvc-volume-tester-vqcjk Mar 22 00:14:25.774: INFO: Deleting pod "pvc-volume-tester-vqcjk" in namespace "csi-mock-volumes-6141" STEP: Deleting claim pvc-hk7xd Mar 22 00:14:25.786: INFO: Waiting up to 2m0s for PersistentVolume pvc-f6899c3e-bf32-41be-a9ef-676c4edd13eb to get deleted Mar 22 00:14:25.805: INFO: PersistentVolume pvc-f6899c3e-bf32-41be-a9ef-676c4edd13eb found and 
phase=Bound (18.522398ms) Mar 22 00:14:27.810: INFO: PersistentVolume pvc-f6899c3e-bf32-41be-a9ef-676c4edd13eb was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-6141 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6141 STEP: Waiting for namespaces [csi-mock-volumes-6141] to vanish STEP: uninstalling csi mock driver Mar 22 00:14:33.863: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-attacher Mar 22 00:14:33.870: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6141 Mar 22 00:14:33.932: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6141 Mar 22 00:14:33.965: INFO: deleting *v1.Role: csi-mock-volumes-6141-464/external-attacher-cfg-csi-mock-volumes-6141 Mar 22 00:14:33.972: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6141-464/csi-attacher-role-cfg Mar 22 00:14:34.024: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-provisioner Mar 22 00:14:34.054: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6141 Mar 22 00:14:34.067: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6141 Mar 22 00:14:34.079: INFO: deleting *v1.Role: csi-mock-volumes-6141-464/external-provisioner-cfg-csi-mock-volumes-6141 Mar 22 00:14:34.090: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6141-464/csi-provisioner-role-cfg Mar 22 00:14:34.102: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-resizer Mar 22 00:14:34.125: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6141 Mar 22 00:14:34.133: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6141 Mar 22 00:14:34.144: INFO: deleting *v1.Role: csi-mock-volumes-6141-464/external-resizer-cfg-csi-mock-volumes-6141 Mar 22 00:14:34.171: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6141-464/csi-resizer-role-cfg Mar 22 00:14:34.176: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-snapshotter Mar 22 00:14:34.188: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6141 Mar 22 00:14:34.197: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6141 Mar 22 00:14:34.205: INFO: deleting *v1.Role: csi-mock-volumes-6141-464/external-snapshotter-leaderelection-csi-mock-volumes-6141 Mar 22 00:14:34.211: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6141-464/external-snapshotter-leaderelection Mar 22 00:14:34.232: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6141-464/csi-mock Mar 22 00:14:34.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6141 Mar 22 00:14:34.257: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6141 Mar 22 00:14:34.265: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6141 Mar 22 00:14:34.291: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6141 Mar 22 00:14:34.296: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6141 Mar 22 00:14:34.307: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6141 Mar 22 00:14:34.313: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6141 Mar 22 00:14:34.319: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6141-464/csi-mockplugin Mar 22 00:14:34.325: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6141 Mar 22 00:14:34.346: 
INFO: deleting *v1.StatefulSet: csi-mock-volumes-6141-464/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6141-464 STEP: Waiting for namespaces [csi-mock-volumes-6141-464] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:15:26.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:157.080 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":133,"completed":38,"skipped":1921,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} S ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:15:26.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e" Mar 22 00:15:30.528: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e && dd if=/dev/zero of=/tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e/file] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:30.528: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:30.732: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:30.732: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:30.843: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e && chmod o+rwx /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:30.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:15:31.271: INFO: Creating a PV followed by a PVC Mar 22 00:15:31.289: INFO: Waiting for PV local-pvhv5tx to bind to PVC pvc-ckmqc Mar 22 00:15:31.290: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ckmqc] to have phase Bound Mar 22 00:15:31.352: INFO: PersistentVolumeClaim pvc-ckmqc found but phase is Pending instead of Bound. Mar 22 00:15:33.357: INFO: PersistentVolumeClaim pvc-ckmqc found but phase is Pending instead of Bound. Mar 22 00:15:35.362: INFO: PersistentVolumeClaim pvc-ckmqc found but phase is Pending instead of Bound. Mar 22 00:15:37.367: INFO: PersistentVolumeClaim pvc-ckmqc found but phase is Pending instead of Bound. Mar 22 00:15:39.371: INFO: PersistentVolumeClaim pvc-ckmqc found but phase is Pending instead of Bound. Mar 22 00:15:41.375: INFO: PersistentVolumeClaim pvc-ckmqc found but phase is Pending instead of Bound. 
Mar 22 00:15:43.379: INFO: PersistentVolumeClaim pvc-ckmqc found and phase=Bound (12.089733686s) Mar 22 00:15:43.379: INFO: Waiting up to 3m0s for PersistentVolume local-pvhv5tx to have phase Bound Mar 22 00:15:43.383: INFO: PersistentVolume local-pvhv5tx found and phase=Bound (3.246948ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:15:47.622: INFO: pod "pod-42e0c84f-0fa1-41b5-96e3-916cff7d0154" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:15:47.622: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9218 PodName:pod-42e0c84f-0fa1-41b5-96e3-916cff7d0154 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:15:47.622: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:47.728: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:15:47.728: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9218 PodName:pod-42e0c84f-0fa1-41b5-96e3-916cff7d0154 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:15:47.728: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:47.824: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:15:51.899: INFO: pod "pod-b96f2574-848a-45f2-971f-ebee93d5eacb" created on Node "latest-worker" Mar 22 00:15:51.899: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9218 PodName:pod-b96f2574-848a-45f2-971f-ebee93d5eacb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:15:51.899: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:52.022: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 22 00:15:52.022: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9218 PodName:pod-b96f2574-848a-45f2-971f-ebee93d5eacb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:15:52.022: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:52.123: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 22 00:15:52.123: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9218 PodName:pod-42e0c84f-0fa1-41b5-96e3-916cff7d0154 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:15:52.123: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:52.240: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-42e0c84f-0fa1-41b5-96e3-916cff7d0154 in namespace persistent-local-volumes-test-9218 STEP: Deleting pod2 STEP: Deleting pod pod-b96f2574-848a-45f2-971f-ebee93d5eacb in namespace persistent-local-volumes-test-9218 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:15:52.286: INFO: Deleting PersistentVolumeClaim "pvc-ckmqc" Mar 22 00:15:52.343: INFO: Deleting PersistentVolume "local-pvhv5tx" Mar 22 00:15:52.375: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:52.375: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:15:52.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:52.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e/file Mar 22 00:15:52.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:52.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e Mar 22 00:15:52.770: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8b640e68-998c-40bb-a4dc-f6b3de65c82e] Namespace:persistent-local-volumes-test-9218 PodName:hostexec-latest-worker-x7vmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:15:52.770: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:15:52.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9218" for this suite. 
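The blockfswithformat variant differs from the raw block case above only in layering an ext4 filesystem over the loop device before the PV is created, which is why these pods exchange data through an ordinary file (cat /mnt/volume1/test-file) rather than dd and hexdump. Setup and teardown, reconstructed from this spec's exec commands with the same placeholder path:

  DIR=/tmp/local-volume-test-demo   # placeholder; the test uses a random /tmp path
  mkfs -t ext4 /dev/loop0           # format the loop device attached as before
  mount -t ext4 /dev/loop0 "$DIR"
  chmod o+rwx "$DIR"                # lets the non-root test pods write under the mount

  # ... pods read and write plain files below the mount point ...

  umount "$DIR"                     # teardown order matters: unmount first,
  losetup -d /dev/loop0             # then detach the loop device,
  rm -r "$DIR"                      # then remove the backing directory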
• [SLOW TEST:26.851 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":39,"skipped":1922,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:15:53.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 22 00:15:53.442: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:15:53.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3312" for this suite. 
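The PVController specs that keep getting skipped here assert on the pv_collector_* gauges exported by kube-controller-manager, in this case an unbound-PV count after creating a PV with no matching claim. On a provider where they do run, the same gauges can be read straight off the controller-manager's metrics endpoint; the port, pod name, and token handling below are assumptions about one common deployment, not anything this log confirms:

  # hypothetical control-plane pod name; secure port 10257 is a common default
  kubectl -n kube-system port-forward pod/kube-controller-manager-control-plane 10257:10257 &
  TOKEN=$(kubectl create token default)   # newer kubectl; any identity allowed to GET /metrics works
  curl -sk -H "Authorization: Bearer $TOKEN" https://localhost:10257/metrics | grep pv_collector_unbound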
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.351 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
    should create unbound pv count metrics for pvc controller after creating pv only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485

    Only supported for providers [gce gke aws] (not local)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:15:53.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Mar 22 00:15:57.827: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7939 PodName:hostexec-latest-worker-7c8jp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:15:57.828: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:15:57.971: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Mar 22 00:15:57.971: INFO: exec latest-worker: stdout: "0\n"
Mar 22 00:15:57.971: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Mar 22 00:15:57.971: INFO: exec latest-worker: exit code: 0
Mar 22 00:15:57.971: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:15:57.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7939" for this suite.
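Note why the probe above reports exit code 0 even though the directory is missing: the output is piped through wc -l, so the pipeline's status is that of wc, and the ls failure surfaces only on stderr plus a count of "0". A small Go sketch of the same probe and the >=1 requirement, assuming it runs directly on the node rather than via a hostexec pod:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// Same probe the hostexec pod runs in the log; `wc -l` masks a missing
	// directory, so absence shows up as a count of 0, not a non-zero exit.
	out, err := exec.Command("sh", "-c",
		"ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l").Output()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	n, _ := strconv.Atoi(strings.TrimSpace(string(out)))
	if n < 1 {
		fmt.Println("Requires at least 1 scsi fs localSSD") // the skip reason logged above
		return
	}
	fmt.Printf("found %d scsi fs localSSD(s)\n", n)
}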
S [SKIPPING] in Spec Setup (BeforeEach) [4.404 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255

      Requires at least 1 scsi fs localSSD
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:15:57.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c"
Mar 22 00:16:02.128: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c && dd if=/dev/zero of=/tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c/file] Namespace:persistent-local-volumes-test-204 PodName:hostexec-latest-worker2-hrstx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:02.128: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:02.324: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-204 PodName:hostexec-latest-worker2-hrstx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:02.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:16:02.438: INFO: Creating a PV followed by a PVC
Mar 22 00:16:02.453: INFO: Waiting for PV local-pvpb7pp to bind to PVC pvc-t59sq
Mar 22 00:16:02.454: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-t59sq] to have phase Bound
Mar 22 00:16:02.459: INFO: PersistentVolumeClaim pvc-t59sq found but phase is Pending instead of Bound.
Mar 22 00:16:04.464: INFO: PersistentVolumeClaim pvc-t59sq found but phase is Pending instead of Bound.
Mar 22 00:16:06.469: INFO: PersistentVolumeClaim pvc-t59sq found but phase is Pending instead of Bound.
Mar 22 00:16:08.474: INFO: PersistentVolumeClaim pvc-t59sq found but phase is Pending instead of Bound.
Mar 22 00:16:10.479: INFO: PersistentVolumeClaim pvc-t59sq found but phase is Pending instead of Bound.
Mar 22 00:16:12.484: INFO: PersistentVolumeClaim pvc-t59sq found and phase=Bound (10.03044687s)
Mar 22 00:16:12.484: INFO: Waiting up to 3m0s for PersistentVolume local-pvpb7pp to have phase Bound
Mar 22 00:16:12.487: INFO: PersistentVolume local-pvpb7pp found and phase=Bound (2.934608ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
Mar 22 00:16:12.493: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:16:12.494: INFO: Deleting PersistentVolumeClaim "pvc-t59sq"
Mar 22 00:16:12.500: INFO: Deleting PersistentVolume "local-pvpb7pp"
Mar 22 00:16:12.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-204 PodName:hostexec-latest-worker2-hrstx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:12.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c/file
Mar 22 00:16:12.629: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-204 PodName:hostexec-latest-worker2-hrstx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:12.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c
Mar 22 00:16:12.765: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a76dd6bc-21a9-4724-9fd1-9d6d38295f0c] Namespace:persistent-local-volumes-test-204 PodName:hostexec-latest-worker2-hrstx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:12.765: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:16:12.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-204" for this suite.
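The bind wait visible above (and repeated throughout this run) polls the claim every 2s for up to 3m until it reports phase Bound. A compilable client-go sketch of that loop; the function name and progress lines are illustrative, not the framework's own helper:

package pvcwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls the named claim until its status phase is Bound,
// printing progress lines of the same shape as the log above.
func WaitForPVCBound(c kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // not found or API error: stop polling
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil // keep polling
		}
		fmt.Printf("PersistentVolumeClaim %s found and phase=Bound (%v)\n", name, time.Since(start))
		return true, nil
	})
}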
S [SKIPPING] in Spec Setup (BeforeEach) [14.925 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      We don't set fsGroup on block device, skipped.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:16:12.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 22 00:16:15.116: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c1927f01-57d7-4404-9d33-868319aa4ded] Namespace:persistent-local-volumes-test-6489 PodName:hostexec-latest-worker-w2g5s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:15.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:16:15.229: INFO: Creating a PV followed by a PVC
Mar 22 00:16:15.263: INFO: Waiting for PV local-pvs55vn to bind to PVC pvc-28p4m
Mar 22 00:16:15.263: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-28p4m] to have phase Bound
Mar 22 00:16:15.286: INFO: PersistentVolumeClaim pvc-28p4m found but phase is Pending instead of Bound.
Mar 22 00:16:17.291: INFO: PersistentVolumeClaim pvc-28p4m found but phase is Pending instead of Bound.
Mar 22 00:16:19.296: INFO: PersistentVolumeClaim pvc-28p4m found but phase is Pending instead of Bound.
Mar 22 00:16:21.301: INFO: PersistentVolumeClaim pvc-28p4m found but phase is Pending instead of Bound.
Mar 22 00:16:23.307: INFO: PersistentVolumeClaim pvc-28p4m found but phase is Pending instead of Bound.
Mar 22 00:16:25.312: INFO: PersistentVolumeClaim pvc-28p4m found but phase is Pending instead of Bound.
Mar 22 00:16:27.316: INFO: PersistentVolumeClaim pvc-28p4m found and phase=Bound (12.052335596s)
Mar 22 00:16:27.316: INFO: Waiting up to 3m0s for PersistentVolume local-pvs55vn to have phase Bound
Mar 22 00:16:27.318: INFO: PersistentVolume local-pvs55vn found and phase=Bound (2.627756ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 22 00:16:31.364: INFO: pod "pod-58eb30d2-1f0f-4d97-b9a3-84f0188321a4" created on Node "latest-worker"
STEP: Writing in pod1
Mar 22 00:16:31.364: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6489 PodName:pod-58eb30d2-1f0f-4d97-b9a3-84f0188321a4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:16:31.364: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:31.470: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[It] should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Mar 22 00:16:31.471: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6489 PodName:pod-58eb30d2-1f0f-4d97-b9a3-84f0188321a4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:16:31.471: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:31.576: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Writing in pod1
Mar 22 00:16:31.576: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c1927f01-57d7-4404-9d33-868319aa4ded > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6489 PodName:pod-58eb30d2-1f0f-4d97-b9a3-84f0188321a4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:16:31.576: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:31.685: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c1927f01-57d7-4404-9d33-868319aa4ded > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-58eb30d2-1f0f-4d97-b9a3-84f0188321a4 in namespace persistent-local-volumes-test-6489
[AfterEach] [Volume type: dir]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:16:31.693: INFO: Deleting PersistentVolumeClaim "pvc-28p4m"
Mar 22 00:16:31.719: INFO: Deleting PersistentVolume "local-pvs55vn"
STEP: Removing the test directory
Mar 22 00:16:31.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c1927f01-57d7-4404-9d33-868319aa4ded] Namespace:persistent-local-volumes-test-6489 PodName:hostexec-latest-worker-w2g5s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:31.737: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:16:31.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6489" for this suite.
• [SLOW TEST:18.990 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":40,"skipped":2032,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:16:31.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 22 00:16:36.148: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1-backend && mount --bind /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1-backend /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1-backend && ln -s /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1-backend /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1] Namespace:persistent-local-volumes-test-5735 PodName:hostexec-latest-worker2-c5c29 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:36.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:16:36.280: INFO: Creating a PV followed by a PVC
Mar 22 00:16:36.296: INFO: Waiting for PV local-pvz62lr to bind to PVC pvc-xdn9s
Mar 22 00:16:36.296: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-xdn9s] to have phase Bound
Mar 22 00:16:36.302: INFO: PersistentVolumeClaim pvc-xdn9s found but phase is Pending instead of Bound.
Mar 22 00:16:38.315: INFO: PersistentVolumeClaim pvc-xdn9s found and phase=Bound (2.018474691s)
Mar 22 00:16:38.315: INFO: Waiting up to 3m0s for PersistentVolume local-pvz62lr to have phase Bound
Mar 22 00:16:38.318: INFO: PersistentVolume local-pvz62lr found and phase=Bound (3.117627ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
STEP: Creating pod1
STEP: Creating a pod
Mar 22 00:16:42.366: INFO: pod "pod-33531aad-089b-483a-bba1-ed836e747059" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 22 00:16:42.366: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5735 PodName:pod-33531aad-089b-483a-bba1-ed836e747059 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:16:42.366: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:42.460: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
Mar 22 00:16:42.460: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5735 PodName:pod-33531aad-089b-483a-bba1-ed836e747059 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:16:42.460: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:42.564: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Deleting pod1
STEP: Deleting pod pod-33531aad-089b-483a-bba1-ed836e747059 in namespace persistent-local-volumes-test-5735
STEP: Creating pod2
STEP: Creating a pod
Mar 22 00:16:46.636: INFO: pod "pod-13d4fab5-58d6-4d55-bed9-c629066b9178" created on Node "latest-worker2"
STEP: Reading in pod2
Mar 22 00:16:46.636: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5735 PodName:pod-13d4fab5-58d6-4d55-bed9-c629066b9178 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:16:46.636: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:16:46.730: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Deleting pod2
STEP: Deleting pod pod-13d4fab5-58d6-4d55-bed9-c629066b9178 in namespace persistent-local-volumes-test-5735
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:16:46.734: INFO: Deleting PersistentVolumeClaim "pvc-xdn9s"
Mar 22 00:16:46.771: INFO: Deleting PersistentVolume "local-pvz62lr"
STEP: Removing the test directory
Mar 22 00:16:46.791: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1 && umount /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1-backend && rm -r /tmp/local-volume-test-63e59fdf-fd85-4594-a1f3-8a7bfc6d38f1-backend] Namespace:persistent-local-volumes-test-5735 PodName:hostexec-latest-worker2-c5c29 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:16:46.791: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:16:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5735" for this suite.
• [SLOW TEST:15.099 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":41,"skipped":2044,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:16:47.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume by restarting pod if attach=off, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-7832
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 22 00:16:47.258: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-attacher
Mar 22 00:16:47.261: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7832
Mar 22 00:16:47.261: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7832
Mar 22 00:16:47.266: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7832
Mar 22 00:16:47.287: INFO: creating *v1.Role: csi-mock-volumes-7832-8602/external-attacher-cfg-csi-mock-volumes-7832
Mar 22 00:16:47.342: INFO: creating *v1.RoleBinding: csi-mock-volumes-7832-8602/csi-attacher-role-cfg
Mar 22 00:16:47.347: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-provisioner
Mar 22 00:16:47.370: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7832
Mar 22 00:16:47.370: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7832
Mar 22 00:16:47.388: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7832
Mar 22 00:16:47.394: INFO: creating *v1.Role: csi-mock-volumes-7832-8602/external-provisioner-cfg-csi-mock-volumes-7832
Mar 22 00:16:47.400: INFO: creating *v1.RoleBinding: csi-mock-volumes-7832-8602/csi-provisioner-role-cfg
Mar 22 00:16:47.424: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-resizer
Mar 22 00:16:47.461: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7832
Mar 22 00:16:47.461: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7832
Mar 22 00:16:47.465: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7832
Mar 22 00:16:47.478: INFO: creating *v1.Role: csi-mock-volumes-7832-8602/external-resizer-cfg-csi-mock-volumes-7832
Mar 22 00:16:47.502: INFO: creating *v1.RoleBinding: csi-mock-volumes-7832-8602/csi-resizer-role-cfg
Mar 22 00:16:47.532: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-snapshotter
Mar 22 00:16:47.544: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7832
Mar 22 00:16:47.544: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7832
Mar 22 00:16:47.550: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7832
Mar 22 00:16:47.556: INFO: creating *v1.Role: csi-mock-volumes-7832-8602/external-snapshotter-leaderelection-csi-mock-volumes-7832
Mar 22 00:16:47.587: INFO: creating *v1.RoleBinding: csi-mock-volumes-7832-8602/external-snapshotter-leaderelection
Mar 22 00:16:47.598: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-mock
Mar 22 00:16:47.622: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7832
Mar 22 00:16:47.640: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7832
Mar 22 00:16:47.646: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7832
Mar 22 00:16:47.652: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7832
Mar 22 00:16:47.682: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7832
Mar 22 00:16:47.730: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7832
Mar 22 00:16:47.740: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7832
Mar 22 00:16:47.774: INFO: creating *v1.StatefulSet: csi-mock-volumes-7832-8602/csi-mockplugin
Mar 22 00:16:47.790: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7832
Mar 22 00:16:47.802: INFO: creating *v1.StatefulSet: csi-mock-volumes-7832-8602/csi-mockplugin-resizer
Mar 22 00:16:47.862: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7832"
Mar 22 00:16:47.904: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7832 to register on node latest-worker
STEP: Creating pod
Mar 22 00:16:57.648: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 22 00:16:57.674: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-zjzmz] to have phase Bound
Mar 22 00:16:57.681: INFO: PersistentVolumeClaim pvc-zjzmz found but phase is Pending instead of Bound.
Mar 22 00:16:59.685: INFO: PersistentVolumeClaim pvc-zjzmz found and phase=Bound (2.011206386s)
STEP: Expanding current pvc
STEP: Waiting for persistent volume resize to finish
STEP: Checking for conditions on pvc
STEP: Deleting the previously created pod
Mar 22 00:17:05.769: INFO: Deleting pod "pvc-volume-tester-cqm2b" in namespace "csi-mock-volumes-7832"
Mar 22 00:17:05.776: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cqm2b" to be fully deleted
STEP: Creating a new pod with same volume
STEP: Waiting for PVC resize to finish
STEP: Deleting pod pvc-volume-tester-cqm2b
Mar 22 00:17:15.802: INFO: Deleting pod "pvc-volume-tester-cqm2b" in namespace "csi-mock-volumes-7832"
STEP: Deleting pod pvc-volume-tester-h2cbt
Mar 22 00:17:15.837: INFO: Deleting pod "pvc-volume-tester-h2cbt" in namespace "csi-mock-volumes-7832"
Mar 22 00:17:15.901: INFO: Wait up to 5m0s for pod "pvc-volume-tester-h2cbt" to be fully deleted
STEP: Deleting claim pvc-zjzmz
Mar 22 00:17:25.948: INFO: Waiting up to 2m0s for PersistentVolume pvc-3a452736-3db6-471a-91ec-b98241052cd7 to get deleted
Mar 22 00:17:25.972: INFO: PersistentVolume pvc-3a452736-3db6-471a-91ec-b98241052cd7 found and phase=Bound (24.341246ms)
Mar 22 00:17:27.977: INFO: PersistentVolume pvc-3a452736-3db6-471a-91ec-b98241052cd7 was removed
STEP: Deleting storageclass csi-mock-volumes-7832-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7832
STEP: Waiting for namespaces [csi-mock-volumes-7832] to vanish
STEP: uninstalling csi mock driver
Mar 22 00:17:34.068: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-attacher
Mar 22 00:17:34.076: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7832
Mar 22 00:17:34.099: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7832
Mar 22 00:17:34.120: INFO: deleting *v1.Role: csi-mock-volumes-7832-8602/external-attacher-cfg-csi-mock-volumes-7832
Mar 22 00:17:34.162: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7832-8602/csi-attacher-role-cfg
Mar 22 00:17:34.168: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-provisioner
Mar 22 00:17:34.182: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7832
Mar 22 00:17:34.187: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7832
Mar 22 00:17:34.198: INFO: deleting *v1.Role: csi-mock-volumes-7832-8602/external-provisioner-cfg-csi-mock-volumes-7832
Mar 22 00:17:34.205: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7832-8602/csi-provisioner-role-cfg
Mar 22 00:17:34.229: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-resizer
Mar 22 00:17:34.260: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7832
Mar 22 00:17:34.297: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7832
Mar 22 00:17:34.312: INFO: deleting *v1.Role: csi-mock-volumes-7832-8602/external-resizer-cfg-csi-mock-volumes-7832
Mar 22 00:17:34.319: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7832-8602/csi-resizer-role-cfg
Mar 22 00:17:34.324: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-snapshotter
Mar 22 00:17:34.330: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7832
Mar 22 00:17:34.341: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7832
Mar 22 00:17:34.350: INFO: deleting *v1.Role: csi-mock-volumes-7832-8602/external-snapshotter-leaderelection-csi-mock-volumes-7832
Mar 22 00:17:34.354: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7832-8602/external-snapshotter-leaderelection
Mar 22 00:17:34.374: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7832-8602/csi-mock
Mar 22 00:17:34.385: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7832
Mar 22 00:17:34.391: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7832
Mar 22 00:17:34.422: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7832
Mar 22 00:17:34.433: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7832
Mar 22 00:17:34.438: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7832
Mar 22 00:17:34.444: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7832
Mar 22 00:17:34.451: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7832
Mar 22 00:17:34.456: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7832-8602/csi-mockplugin
Mar 22 00:17:34.463: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7832
Mar 22 00:17:34.507: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7832-8602/csi-mockplugin-resizer
STEP: deleting the driver namespace: csi-mock-volumes-7832-8602
STEP: Waiting for namespaces [csi-mock-volumes-7832-8602] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:18:50.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:123.606 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":133,"completed":42,"skipped":2059,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:18:50.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be passed when podInfoOnMount=false
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-4769
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 22 00:18:50.830: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-attacher
Mar 22 00:18:50.834: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4769
Mar 22 00:18:50.834: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4769
Mar 22 00:18:50.852: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4769
Mar 22 00:18:50.876: INFO: creating *v1.Role: csi-mock-volumes-4769-1869/external-attacher-cfg-csi-mock-volumes-4769
Mar 22 00:18:50.906: INFO: creating *v1.RoleBinding: csi-mock-volumes-4769-1869/csi-attacher-role-cfg
Mar 22 00:18:50.942: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-provisioner
Mar 22 00:18:50.946: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4769
Mar 22 00:18:50.946: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4769
Mar 22 00:18:50.966: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4769
Mar 22 00:18:51.024: INFO: creating *v1.Role: csi-mock-volumes-4769-1869/external-provisioner-cfg-csi-mock-volumes-4769
Mar 22 00:18:51.080: INFO: creating *v1.RoleBinding: csi-mock-volumes-4769-1869/csi-provisioner-role-cfg
Mar 22 00:18:51.091: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-resizer
Mar 22 00:18:51.097: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4769
Mar 22 00:18:51.097: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4769
Mar 22 00:18:51.104: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4769
Mar 22 00:18:51.109: INFO: creating *v1.Role: csi-mock-volumes-4769-1869/external-resizer-cfg-csi-mock-volumes-4769
Mar 22 00:18:51.115: INFO: creating *v1.RoleBinding: csi-mock-volumes-4769-1869/csi-resizer-role-cfg
Mar 22 00:18:51.138: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-snapshotter
Mar 22 00:18:51.163: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4769
Mar 22 00:18:51.163: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4769
Mar 22 00:18:51.175: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4769
Mar 22 00:18:51.199: INFO: creating *v1.Role: csi-mock-volumes-4769-1869/external-snapshotter-leaderelection-csi-mock-volumes-4769
Mar 22 00:18:51.211: INFO: creating *v1.RoleBinding: csi-mock-volumes-4769-1869/external-snapshotter-leaderelection
Mar 22 00:18:51.224: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-mock
Mar 22 00:18:51.229: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4769
Mar 22 00:18:51.235: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4769
Mar 22 00:18:51.276: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4769
Mar 22 00:18:51.331: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4769
Mar 22 00:18:51.337: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4769
Mar 22 00:18:51.347: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4769
Mar 22 00:18:51.350: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4769
Mar 22 00:18:51.355: INFO: creating *v1.StatefulSet: csi-mock-volumes-4769-1869/csi-mockplugin
Mar 22 00:18:51.378: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4769
Mar 22 00:18:51.415: INFO: creating *v1.StatefulSet: csi-mock-volumes-4769-1869/csi-mockplugin-attacher
Mar 22 00:18:51.470: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4769"
Mar 22 00:18:51.475: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4769 to register on node latest-worker2
STEP: Creating pod
Mar 22 00:19:01.104: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 22 00:19:01.111: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-2qqhh] to have phase Bound
Mar 22 00:19:01.132: INFO: PersistentVolumeClaim pvc-2qqhh found but phase is Pending instead of Bound.
Mar 22 00:19:03.137: INFO: PersistentVolumeClaim pvc-2qqhh found and phase=Bound (2.025409839s)
STEP: Deleting the previously created pod
Mar 22 00:19:11.163: INFO: Deleting pod "pvc-volume-tester-cgpc8" in namespace "csi-mock-volumes-4769"
Mar 22 00:19:11.171: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cgpc8" to be fully deleted
STEP: Checking CSI driver logs
Mar 22 00:19:25.320: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d95345bf-de65-465e-aeb3-5fb2cbaf46fe/volumes/kubernetes.io~csi/pvc-20510de0-a11c-44ee-a8e7-4e7ab5d96b90/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-cgpc8
Mar 22 00:19:25.320: INFO: Deleting pod "pvc-volume-tester-cgpc8" in namespace "csi-mock-volumes-4769"
STEP: Deleting claim pvc-2qqhh
Mar 22 00:19:25.358: INFO: Waiting up to 2m0s for PersistentVolume pvc-20510de0-a11c-44ee-a8e7-4e7ab5d96b90 to get deleted
Mar 22 00:19:25.368: INFO: PersistentVolume pvc-20510de0-a11c-44ee-a8e7-4e7ab5d96b90 found and phase=Bound (10.239849ms)
Mar 22 00:19:27.373: INFO: PersistentVolume pvc-20510de0-a11c-44ee-a8e7-4e7ab5d96b90 was removed
STEP: Deleting storageclass csi-mock-volumes-4769-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4769
STEP: Waiting for namespaces [csi-mock-volumes-4769] to vanish
STEP: uninstalling csi mock driver
Mar 22 00:19:33.398: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-attacher
Mar 22 00:19:33.403: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4769
Mar 22 00:19:33.429: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4769
Mar 22 00:19:33.441: INFO: deleting *v1.Role: csi-mock-volumes-4769-1869/external-attacher-cfg-csi-mock-volumes-4769
Mar 22 00:19:33.448: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4769-1869/csi-attacher-role-cfg
Mar 22 00:19:33.454: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-provisioner
Mar 22 00:19:33.460: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4769
Mar 22 00:19:33.470: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4769
Mar 22 00:19:33.478: INFO: deleting *v1.Role: csi-mock-volumes-4769-1869/external-provisioner-cfg-csi-mock-volumes-4769
Mar 22 00:19:33.502: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4769-1869/csi-provisioner-role-cfg
Mar 22 00:19:33.551: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-resizer
Mar 22 00:19:33.562: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4769
Mar 22 00:19:33.568: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4769
Mar 22 00:19:33.574: INFO: deleting *v1.Role: csi-mock-volumes-4769-1869/external-resizer-cfg-csi-mock-volumes-4769
Mar 22 00:19:33.580: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4769-1869/csi-resizer-role-cfg
Mar 22 00:19:33.586: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-snapshotter
Mar 22 00:19:33.591: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4769
Mar 22 00:19:33.598: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4769
Mar 22 00:19:33.609: INFO: deleting *v1.Role: csi-mock-volumes-4769-1869/external-snapshotter-leaderelection-csi-mock-volumes-4769
Mar 22 00:19:33.616: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4769-1869/external-snapshotter-leaderelection
Mar 22 00:19:33.697: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4769-1869/csi-mock
Mar 22 00:19:33.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4769
Mar 22 00:19:33.722: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4769
Mar 22 00:19:33.729: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4769
Mar 22 00:19:33.754: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4769
Mar 22 00:19:33.760: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4769
Mar 22 00:19:33.765: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4769
Mar 22 00:19:33.772: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4769
Mar 22 00:19:33.790: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4769-1869/csi-mockplugin
Mar 22 00:19:33.820: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4769
Mar 22 00:19:33.850: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4769-1869/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-4769-1869
STEP: Waiting for namespaces [csi-mock-volumes-4769-1869] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:20:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:99.272 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":133,"completed":43,"skipped":2202,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:20:29.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-987e3e88-cbfc-42b2-9d60-09f175d67dc6
STEP: Creating a pod to test consume configMaps
Mar 22 00:20:30.001: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833" in namespace "configmap-5595" to be "Succeeded or Failed"
Mar 22 00:20:30.040: INFO: Pod "pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833": Phase="Pending", Reason="", readiness=false. Elapsed: 38.03358ms
Mar 22 00:20:32.044: INFO: Pod "pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042460482s
Mar 22 00:20:34.048: INFO: Pod "pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046731919s
STEP: Saw pod success
Mar 22 00:20:34.048: INFO: Pod "pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833" satisfied condition "Succeeded or Failed"
Mar 22 00:20:34.051: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833 container agnhost-container:
STEP: delete the pod
Mar 22 00:20:34.089: INFO: Waiting for pod pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833 to disappear
Mar 22 00:20:34.107: INFO: Pod pod-configmaps-ff919014-bcd8-4fcc-9e3c-fcbad8e8a833 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:20:34.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5595" for this suite.
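The spec above exercises a pod whose ConfigMap volume carries an explicit defaultMode while the pod security context sets a non-root user and an fsGroup, so the kubelet applies the requested file mode and group ownership to the projected files. A Go sketch of that pod shape built with the core API types; the object names, image tag, and numeric values are illustrative, not the values the suite generated:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapPod builds a pod that mounts a ConfigMap volume with an explicit
// defaultMode, running as a non-root user with fsGroup set.
func configMapPod() *corev1.Pod {
	defaultMode := int32(0440) // file mode applied to the projected keys
	user := int64(1000)        // non-root uid
	fsGroup := int64(1001)     // group the volume files are chowned to
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &user,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative tag
				Args:  []string{"mounttest", "--file_mode=/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = configMapPod() }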
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":44,"skipped":2205,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:20:34.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3"
Mar 22 00:20:38.245: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3 && dd if=/dev/zero of=/tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3/file] Namespace:persistent-local-volumes-test-1315 PodName:hostexec-latest-worker2-8tt2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:20:38.245: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:20:38.436: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1315 PodName:hostexec-latest-worker2-8tt2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:20:38.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:20:38.546: INFO: Creating a PV followed by a PVC
Mar 22 00:20:38.563: INFO: Waiting for PV local-pv6w959 to bind to PVC pvc-wfp6k
Mar 22 00:20:38.563: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wfp6k] to have phase Bound
Mar 22 00:20:38.581: INFO: PersistentVolumeClaim pvc-wfp6k found but phase is Pending instead of Bound.
Mar 22 00:20:40.585: INFO: PersistentVolumeClaim pvc-wfp6k found but phase is Pending instead of Bound.
Mar 22 00:20:42.590: INFO: PersistentVolumeClaim pvc-wfp6k found and phase=Bound (4.026559681s)
Mar 22 00:20:42.590: INFO: Waiting up to 3m0s for PersistentVolume local-pv6w959 to have phase Bound
Mar 22 00:20:42.593: INFO: PersistentVolume local-pv6w959 found and phase=Bound (2.817063ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 22 00:20:48.675: INFO: pod "pod-91ce6b3d-9750-4d3a-a329-f895f1d7143f" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 22 00:20:48.675: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1315 PodName:pod-91ce6b3d-9750-4d3a-a329-f895f1d7143f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:20:48.675: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:20:48.795: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Mar 22 00:20:48.796: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1315 PodName:pod-91ce6b3d-9750-4d3a-a329-f895f1d7143f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:20:48.796: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:20:48.911: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-91ce6b3d-9750-4d3a-a329-f895f1d7143f in namespace persistent-local-volumes-test-1315
[AfterEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:20:48.927: INFO: Deleting PersistentVolumeClaim "pvc-wfp6k"
Mar 22 00:20:48.941: INFO: Deleting PersistentVolume "local-pv6w959"
Mar 22 00:20:48.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1315 PodName:hostexec-latest-worker2-8tt2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:20:48.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3/file
Mar 22 00:20:49.188: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1315 PodName:hostexec-latest-worker2-8tt2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:20:49.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3
Mar 22 00:20:49.322: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-df69b7ea-974b-4c08-9ae3-00e3fd816bc3] Namespace:persistent-local-volumes-test-1315 PodName:hostexec-latest-worker2-8tt2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:20:49.322: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:20:49.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1315" for this suite.
• [SLOW TEST:15.544 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":45,"skipped":2354,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct
mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes NFSv4 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:20:49.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 22 00:20:49.929: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:20:49.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-6631" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.274 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:20:49.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 22 00:20:50.081: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 22 00:20:50.111: INFO: Default storage class: "standard" Mar 22 00:20:50.111: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Mar 22 00:21:00.298: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionz8fl6] to have phase Bound Mar 22 00:21:00.301: INFO: PersistentVolumeClaim pvc-protectionz8fl6 found and phase=Bound (3.045895ms) STEP: Checking that PVC Protection finalizer is set [It] Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 STEP: 
Deleting the pod using the PVC Mar 22 00:21:00.303: INFO: Deleting pod "pvc-tester-7b94d" in namespace "pvc-protection-1249" Mar 22 00:21:00.308: INFO: Wait up to 5m0s for pod "pvc-tester-7b94d" to be fully deleted STEP: Deleting the PVC Mar 22 00:21:16.331: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionz8fl6 to be removed Mar 22 00:21:18.380: INFO: Claim "pvc-protectionz8fl6" in namespace "pvc-protection-1249" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:21:18.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-1249" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:28.449 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":133,"completed":46,"skipped":2407,"failed":8,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:21:18.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7985 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 
00:21:19.089: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-attacher Mar 22 00:21:19.093: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7985 Mar 22 00:21:19.093: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7985 Mar 22 00:21:19.103: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7985 Mar 22 00:21:19.136: INFO: creating *v1.Role: csi-mock-volumes-7985-6634/external-attacher-cfg-csi-mock-volumes-7985 Mar 22 00:21:19.153: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-6634/csi-attacher-role-cfg Mar 22 00:21:19.220: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-provisioner Mar 22 00:21:19.229: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7985 Mar 22 00:21:19.229: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7985 Mar 22 00:21:19.277: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7985 Mar 22 00:21:19.495: INFO: creating *v1.Role: csi-mock-volumes-7985-6634/external-provisioner-cfg-csi-mock-volumes-7985 Mar 22 00:21:19.699: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-6634/csi-provisioner-role-cfg Mar 22 00:21:19.714: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-resizer Mar 22 00:21:19.735: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7985 Mar 22 00:21:19.736: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7985 Mar 22 00:21:19.774: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7985 Mar 22 00:21:20.035: INFO: creating *v1.Role: csi-mock-volumes-7985-6634/external-resizer-cfg-csi-mock-volumes-7985 Mar 22 00:21:20.125: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-6634/csi-resizer-role-cfg Mar 22 00:21:20.253: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-snapshotter Mar 22 00:21:20.307: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7985 Mar 22 00:21:20.307: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7985 Mar 22 00:21:20.325: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7985 Mar 22 00:21:20.382: INFO: creating *v1.Role: csi-mock-volumes-7985-6634/external-snapshotter-leaderelection-csi-mock-volumes-7985 Mar 22 00:21:20.622: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-6634/external-snapshotter-leaderelection Mar 22 00:21:20.626: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-mock Mar 22 00:21:20.690: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7985 Mar 22 00:21:20.933: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7985 Mar 22 00:21:20.948: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7985 Mar 22 00:21:20.990: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7985 Mar 22 00:21:21.125: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7985 Mar 22 00:21:21.343: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7985 Mar 22 00:21:21.385: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7985 Mar 22 00:21:21.397: INFO: creating *v1.StatefulSet: csi-mock-volumes-7985-6634/csi-mockplugin Mar 22 00:21:21.491: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7985 Mar 22 
00:21:21.517: INFO: creating *v1.StatefulSet: csi-mock-volumes-7985-6634/csi-mockplugin-attacher Mar 22 00:21:21.766: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7985" Mar 22 00:21:21.951: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7985 to register on node latest-worker2 Mar 22 00:21:31.902: FAIL: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-7985 Capacity:100Gi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc003518460>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 +0x47a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7985 STEP: Waiting for namespaces [csi-mock-volumes-7985] to vanish STEP: uninstalling csi mock driver Mar 22 00:21:37.913: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-attacher Mar 22 00:21:37.919: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7985 Mar 22 00:21:37.975: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7985 Mar 22 00:21:38.218: INFO: deleting *v1.Role: csi-mock-volumes-7985-6634/external-attacher-cfg-csi-mock-volumes-7985 Mar 22 00:21:38.250: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-6634/csi-attacher-role-cfg Mar 22 00:21:38.262: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-provisioner Mar 22 00:21:38.279: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7985 Mar 22 00:21:38.422: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7985 Mar 22 00:21:38.498: INFO: deleting *v1.Role: csi-mock-volumes-7985-6634/external-provisioner-cfg-csi-mock-volumes-7985 Mar 22 00:21:38.526: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-6634/csi-provisioner-role-cfg Mar 22 00:21:38.542: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-resizer Mar 22 00:21:38.591: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7985 Mar 22 00:21:38.617: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7985 Mar 22 00:21:38.650: INFO: deleting *v1.Role: 
csi-mock-volumes-7985-6634/external-resizer-cfg-csi-mock-volumes-7985 Mar 22 00:21:38.687: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-6634/csi-resizer-role-cfg Mar 22 00:21:38.775: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-snapshotter Mar 22 00:21:38.787: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7985 Mar 22 00:21:38.794: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7985 Mar 22 00:21:38.805: INFO: deleting *v1.Role: csi-mock-volumes-7985-6634/external-snapshotter-leaderelection-csi-mock-volumes-7985 Mar 22 00:21:38.816: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-6634/external-snapshotter-leaderelection Mar 22 00:21:38.824: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-6634/csi-mock Mar 22 00:21:38.854: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7985 Mar 22 00:21:38.906: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7985 Mar 22 00:21:38.915: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7985 Mar 22 00:21:38.920: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7985 Mar 22 00:21:38.951: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7985 Mar 22 00:21:38.996: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7985 Mar 22 00:21:39.035: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7985 Mar 22 00:21:39.107: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7985-6634/csi-mockplugin Mar 22 00:21:39.314: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7985 Mar 22 00:21:39.376: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7985-6634/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7985-6634 STEP: Waiting for namespaces [csi-mock-volumes-7985-6634] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:31.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [73.090 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, have capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 22 00:21:31.902: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-7985 Capacity:100Gi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc003518460>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find 
the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":133,"completed":46,"skipped":2417,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:31.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 22 00:22:31.643: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:31.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3390" for this suite. 
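The 404 ("the server could not find the requested resource") in the CSIStorageCapacity failure above is the apiserver saying it does not serve the CSIStorageCapacity kind at the group/version the client requested; the resource was still graduating through alpha/beta at this point, so a client/apiserver version skew or a disabled feature gate are the plausible causes. A quick way to confirm what a cluster actually serves, using only standard kubectl calls:

# Is csistoragecapacities served at any version here?
kubectl api-resources --api-group=storage.k8s.io
# Essentially the request that 404s above if the expected version is not enabled:
kubectl get --raw /apis/storage.k8s.io/v1beta1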
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.215 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:31.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Mar 22 00:22:31.759: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Mar 22 00:22:31.770: INFO: Waiting up to 30s for PersistentVolume hostpath-4zm4m to have phase Available Mar 22 00:22:31.775: INFO: PersistentVolume hostpath-4zm4m found but phase is Pending instead of Available. Mar 22 00:22:32.781: INFO: PersistentVolume hostpath-4zm4m found and phase=Available (1.010741213s) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Mar 22 00:22:32.789: INFO: Waiting up to 3m0s for PersistentVolume hostpath-4zm4m to get deleted Mar 22 00:22:32.805: INFO: PersistentVolume hostpath-4zm4m found and phase=Available (16.052104ms) Mar 22 00:22:34.811: INFO: PersistentVolume hostpath-4zm4m was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:34.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-7018" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Mar 22 00:22:34.822: INFO: AfterEach: Cleaning up test resources. 
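The finalizer that the "Checking that PV Protection finalizer is set" step above looks for is kubernetes.io/pv-protection, which the storage-object-in-use protection machinery puts on every PersistentVolume so that a delete only marks the object Terminating until nothing references it. A sketch of inspecting it by hand (the PV name is illustrative):

# Show the protection finalizer on a PV:
kubectl get pv hostpath-demo -o jsonpath='{.metadata.finalizers}'
# -> ["kubernetes.io/pv-protection"]
# An unbound PV has no users, so deletion completes promptly -- the
# "immediate" deletion the spec verifies:
kubectl delete pv hostpath-demo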
Mar 22 00:22:34.822: INFO: pvc is nil Mar 22 00:22:34.822: INFO: Deleting PersistentVolume "hostpath-4zm4m" •{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":133,"completed":47,"skipped":2504,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:34.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 22 00:23:04.967: INFO: Deleting pod "pv-6747"/"pod-ephm-test-projected-hhf8" Mar 22 00:23:04.967: INFO: Deleting pod "pod-ephm-test-projected-hhf8" in namespace "pv-6747" Mar 22 00:23:04.975: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-hhf8" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:15.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6747" for this suite. 
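The Ephemeralstorage spec that just finished is essentially: create a pod whose volume points at a Secret that was never created, let it sit in ContainerCreating with FailedMount events, then verify the pod can still be deleted cleanly. A minimal reproduction under those assumptions (names and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-ephm-test-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: missing
      mountPath: /ephm
  volumes:
  - name: missing
    secret:
      secretName: does-not-exist   # intentionally never created
EOF
# The pod never starts, but deletion must still go through:
kubectl delete pod pod-ephm-test-demo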
• [SLOW TEST:40.360 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":133,"completed":48,"skipped":2570,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:23:15.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Mar 22 00:23:15.275: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Mar 22 00:23:15.312: INFO: Waiting up to 30s for PersistentVolume hostpath-jq7db to have phase Available Mar 22 00:23:15.315: INFO: PersistentVolume hostpath-jq7db found but phase is Pending instead of Available. 
Mar 22 00:23:16.319: INFO: PersistentVolume hostpath-jq7db found and phase=Available (1.007341121s) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Mar 22 00:23:16.330: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zspp5] to have phase Bound Mar 22 00:23:16.360: INFO: PersistentVolumeClaim pvc-zspp5 found but phase is Pending instead of Bound. Mar 22 00:23:18.365: INFO: PersistentVolumeClaim pvc-zspp5 found and phase=Bound (2.034588621s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Mar 22 00:23:18.387: INFO: Waiting up to 3m0s for PersistentVolume hostpath-jq7db to get deleted Mar 22 00:23:18.389: INFO: PersistentVolume hostpath-jq7db found and phase=Bound (2.743948ms) Mar 22 00:23:20.393: INFO: PersistentVolume hostpath-jq7db was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:20.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-8414" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Mar 22 00:23:20.403: INFO: AfterEach: Cleaning up test resources. Mar 22 00:23:20.403: INFO: Deleting PersistentVolumeClaim "pvc-zspp5" Mar 22 00:23:20.405: INFO: Deleting PersistentVolume "hostpath-jq7db" • [SLOW TEST:5.210 seconds] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":133,"completed":49,"skipped":2634,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume 
CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:23:20.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:23:24.671: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-593a3819-dd46-4335-a5de-9ffa92fe1021 && mount --bind /tmp/local-volume-test-593a3819-dd46-4335-a5de-9ffa92fe1021 /tmp/local-volume-test-593a3819-dd46-4335-a5de-9ffa92fe1021] Namespace:persistent-local-volumes-test-7484 PodName:hostexec-latest-worker2-kl8ch ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:23:24.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:23:24.808: INFO: Creating a PV followed by a PVC Mar 22 00:23:24.831: INFO: Waiting for PV local-pv6d7cz to bind to PVC pvc-d4s5h Mar 22 00:23:24.831: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-d4s5h] to have phase Bound Mar 22 00:23:24.869: INFO: PersistentVolumeClaim pvc-d4s5h found but phase is Pending instead of Bound. 
Mar 22 00:23:26.872: INFO: PersistentVolumeClaim pvc-d4s5h found and phase=Bound (2.041041706s) Mar 22 00:23:26.872: INFO: Waiting up to 3m0s for PersistentVolume local-pv6d7cz to have phase Bound Mar 22 00:23:26.874: INFO: PersistentVolume local-pv6d7cz found and phase=Bound (2.015247ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 22 00:23:26.878: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:23:26.878: INFO: Deleting PersistentVolumeClaim "pvc-d4s5h" Mar 22 00:23:26.882: INFO: Deleting PersistentVolume "local-pv6d7cz" STEP: Removing the test directory Mar 22 00:23:26.894: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-593a3819-dd46-4335-a5de-9ffa92fe1021 && rm -r /tmp/local-volume-test-593a3819-dd46-4335-a5de-9ffa92fe1021] Namespace:persistent-local-volumes-test-7484 PodName:hostexec-latest-worker2-kl8ch ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:23:26.894: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:27.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7484" for this suite. 
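Both local-volume fixtures seen in this stretch of the run are plain node-side shell, executed through the hostexec pod in the host mount namespace (hence the nsenter wrapper in the log). Condensed from the commands above, with an illustrative path in place of the generated UUID one:

DIR=/tmp/local-volume-test-demo
# "dir-bindmounted": a directory bind-mounted over itself, making it a
# distinct mount point the local volume plugin can use as a PV source:
mkdir "$DIR" && mount --bind "$DIR" "$DIR"
# ... exercised as a local PV ...
umount "$DIR" && rm -r "$DIR"                 # teardown, as in the cleanup step above

# "blockfswithoutformat" (earlier specs): a loop device over a 20 MiB file of zeros:
mkdir -p "$DIR" && dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"
E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
# ... exercised as a block-backed local PV ...
losetup -d "$E2E_LOOP_DEV" && rm -r "$DIR"    # teardown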
S [SKIPPING] [6.686 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:23:27.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:27.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9081" for this suite. 
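The EmptyDir spec above exercises a memory-backed emptyDir with an explicit size: medium: Memory makes the kubelet back the volume with tmpfs, and sizeLimit caps it (with the SizeMemoryBackedVolumes feature of this era, the tmpfs mount itself is sized to the limit). A sketch with an illustrative size and image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["sh", "-c", "df -h /mnt/tmpfs"]
    volumeMounts:
    - name: mem
      mountPath: /mnt/tmpfs
  volumes:
  - name: mem
    emptyDir:
      medium: Memory
      sizeLimit: 128Mi
EOF
# df inside the container should report a tmpfs mounted at /mnt/tmpfs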
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":133,"completed":50,"skipped":2705,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:23:27.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:23:31.317: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7-backend && mount --bind /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7-backend /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7-backend && ln -s /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7-backend /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7] Namespace:persistent-local-volumes-test-252 PodName:hostexec-latest-worker2-7nvq6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:23:31.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:23:31.461: INFO: Creating a PV followed by a PVC Mar 22 00:23:31.488: INFO: Waiting for PV local-pvjglqs to bind to PVC pvc-27pg2 Mar 22 00:23:31.488: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-27pg2] to have phase Bound Mar 22 00:23:31.528: INFO: 
PersistentVolumeClaim pvc-27pg2 found but phase is Pending instead of Bound. Mar 22 00:23:33.532: INFO: PersistentVolumeClaim pvc-27pg2 found but phase is Pending instead of Bound. Mar 22 00:23:35.537: INFO: PersistentVolumeClaim pvc-27pg2 found but phase is Pending instead of Bound. Mar 22 00:23:37.543: INFO: PersistentVolumeClaim pvc-27pg2 found but phase is Pending instead of Bound. Mar 22 00:23:39.547: INFO: PersistentVolumeClaim pvc-27pg2 found but phase is Pending instead of Bound. Mar 22 00:23:41.552: INFO: PersistentVolumeClaim pvc-27pg2 found and phase=Bound (10.064242088s) Mar 22 00:23:41.552: INFO: Waiting up to 3m0s for PersistentVolume local-pvjglqs to have phase Bound Mar 22 00:23:41.555: INFO: PersistentVolume local-pvjglqs found and phase=Bound (2.753116ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:23:45.633: INFO: pod "pod-a27b1e2a-7322-4096-8030-c6fc1a873862" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:23:45.633: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-252 PodName:pod-a27b1e2a-7322-4096-8030-c6fc1a873862 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:23:45.633: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:23:45.751: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 22 00:23:45.751: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-252 PodName:pod-a27b1e2a-7322-4096-8030-c6fc1a873862 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:23:45.751: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:23:45.859: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 22 00:23:45.859: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-252 PodName:pod-a27b1e2a-7322-4096-8030-c6fc1a873862 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:23:45.859: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:23:45.959: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-a27b1e2a-7322-4096-8030-c6fc1a873862 in namespace persistent-local-volumes-test-252 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:23:45.965: INFO: Deleting PersistentVolumeClaim "pvc-27pg2" Mar 
22 00:23:46.006: INFO: Deleting PersistentVolume "local-pvjglqs" STEP: Removing the test directory Mar 22 00:23:46.063: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7 && umount /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7-backend && rm -r /tmp/local-volume-test-3fe29223-5c28-45a2-bf40-c06da824b7e7-backend] Namespace:persistent-local-volumes-test-252 PodName:hostexec-latest-worker2-7nvq6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:23:46.063: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:46.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-252" for this suite. • [SLOW TEST:19.083 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":51,"skipped":2783,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSS ------------------------------ [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:61 [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Mar 22 00:23:46.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:46 Mar 22 00:23:46.443: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:46.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-3025" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.170 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should only be allowed to provision PDs in zones where nodes exist [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:61 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:23:46.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-3097 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 22 00:23:46.751: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-attacher Mar 22 00:23:46.754: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3097 Mar 22 00:23:46.754: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3097 Mar 22 00:23:46.775: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3097 Mar 22 00:23:46.793: INFO: creating *v1.Role: csi-mock-volumes-3097-9535/external-attacher-cfg-csi-mock-volumes-3097 Mar 22 00:23:46.810: INFO: creating *v1.RoleBinding: csi-mock-volumes-3097-9535/csi-attacher-role-cfg Mar 22 00:23:46.851: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-provisioner Mar 22 00:23:46.854: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3097 Mar 22 00:23:46.854: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3097 Mar 22 00:23:46.864: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3097 Mar 22 00:23:46.870: INFO: creating *v1.Role: csi-mock-volumes-3097-9535/external-provisioner-cfg-csi-mock-volumes-3097 Mar 22 00:23:46.875: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-3097-9535/csi-provisioner-role-cfg Mar 22 00:23:46.910: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-resizer Mar 22 00:23:46.945: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3097 Mar 22 00:23:46.945: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3097 Mar 22 00:23:46.977: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3097 Mar 22 00:23:46.983: INFO: creating *v1.Role: csi-mock-volumes-3097-9535/external-resizer-cfg-csi-mock-volumes-3097 Mar 22 00:23:46.989: INFO: creating *v1.RoleBinding: csi-mock-volumes-3097-9535/csi-resizer-role-cfg Mar 22 00:23:47.014: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-snapshotter Mar 22 00:23:47.031: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3097 Mar 22 00:23:47.031: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3097 Mar 22 00:23:47.037: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3097 Mar 22 00:23:47.050: INFO: creating *v1.Role: csi-mock-volumes-3097-9535/external-snapshotter-leaderelection-csi-mock-volumes-3097 Mar 22 00:23:47.062: INFO: creating *v1.RoleBinding: csi-mock-volumes-3097-9535/external-snapshotter-leaderelection Mar 22 00:23:47.074: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-mock Mar 22 00:23:47.103: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3097 Mar 22 00:23:47.115: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3097 Mar 22 00:23:47.137: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3097 Mar 22 00:23:47.173: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3097 Mar 22 00:23:47.235: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3097 Mar 22 00:23:47.239: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3097 Mar 22 00:23:47.247: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3097 Mar 22 00:23:47.291: INFO: creating *v1.StatefulSet: csi-mock-volumes-3097-9535/csi-mockplugin Mar 22 00:23:47.309: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3097 Mar 22 00:23:47.326: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3097" Mar 22 00:23:47.378: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3097 to register on node latest-worker I0322 00:23:57.150842 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3097","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0322 00:23:57.258914 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0322 00:23:57.261649 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3097","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0322 00:23:57.263908 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I0322 00:23:57.316306 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0322 00:23:57.363665 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3097","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Mar 22 00:24:04.090: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0322 00:24:04.142957 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0322 00:24:05.142769 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I0322 00:24:06.417757 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 22 00:24:06.420: INFO: >>> kubeConfig: /root/.kube/config I0322 00:24:06.555271 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715","storage.kubernetes.io/csiProvisionerIdentity":"1616372637318-8081-csi-mock-csi-mock-volumes-3097"}},"Response":{},"Error":"","FullError":null} I0322 00:24:06.562856 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 22 00:24:06.565: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:24:06.675: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:24:06.779: INFO: >>> kubeConfig: /root/.kube/config I0322 00:24:06.872720 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715/globalmount","target_path":"/var/lib/kubelet/pods/e069a0dc-9e33-4889-8684-7998eb4d0d26/volumes/kubernetes.io~csi/pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715","storage.kubernetes.io/csiProvisionerIdentity":"1616372637318-8081-csi-mock-csi-mock-volumes-3097"}},"Response":{},"Error":"","FullError":null} Mar 22 00:24:12.141: INFO: Deleting pod "pvc-volume-tester-cvhjh" in namespace "csi-mock-volumes-3097" Mar 22 00:24:12.178: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cvhjh" to be fully deleted I0322 00:24:12.250916 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0322 00:24:12.254248 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/e069a0dc-9e33-4889-8684-7998eb4d0d26/volumes/kubernetes.io~csi/pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Mar 22 00:24:14.238: INFO: >>> kubeConfig: /root/.kube/config I0322 00:24:15.118606 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e069a0dc-9e33-4889-8684-7998eb4d0d26/volumes/kubernetes.io~csi/pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715/mount"},"Response":{},"Error":"","FullError":null} I0322 00:24:15.145222 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0322 00:24:15.148106 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715/globalmount"},"Response":{},"Error":"","FullError":null} I0322 00:24:46.274546 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 22 00:24:47.227: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995136", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fa120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fa138)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000a3b2b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000a3b2f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.227: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995139", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004605110), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004605128)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004605140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004605158)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000ff5a40), VolumeMode:(*v1.PersistentVolumeMode)(0xc000ff5a50), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.227: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995140", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbe78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbe90)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbea8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbec0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbed8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbef0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0040b5050), VolumeMode:(*v1.PersistentVolumeMode)(0xc0040b5060), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.227: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995143", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbf08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbf20)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbf38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbf50)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbf68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbf80)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0040b5090), VolumeMode:(*v1.PersistentVolumeMode)(0xc0040b50a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), 
Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.227: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995149", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbfb0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fbfc8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fbfe0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005324000)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005324018), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005324030)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0040b50d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0040b50e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.228: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995154", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a6288), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a62a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a62b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a62d0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", 
APIVersion:"v1", Time:(*v1.Time)(0xc0009a62e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a6300)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715", StorageClassName:(*string)(0xc000a7e660), VolumeMode:(*v1.PersistentVolumeMode)(0xc000a7e6a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.228: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995156", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a6330), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a6348)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a6360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a6378)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a6390), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a63a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715", StorageClassName:(*string)(0xc000a7e730), VolumeMode:(*v1.PersistentVolumeMode)(0xc000a7e740), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.228: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", 
ResourceVersion:"6995345", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc0009a63d8), DeletionGracePeriodSeconds:(*int64)(0xc0024c7b78), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a63f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a6408)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a6420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a6438)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0009a6450), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0009a6468)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715", StorageClassName:(*string)(0xc000a7e820), VolumeMode:(*v1.PersistentVolumeMode)(0xc000a7e860), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 22 00:24:47.228: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h4kr6", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3097", SelfLink:"", UID:"e5bac2a2-22fd-49a3-be5d-146c35e42715", ResourceVersion:"6995346", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969444, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc005324270), DeletionGracePeriodSeconds:(*int64)(0xc003a6ce98), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3097", "volume.kubernetes.io/selected-node":"latest-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005324288), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0053242a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0053242b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0053242d0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc0053242e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005324300)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-e5bac2a2-22fd-49a3-be5d-146c35e42715", StorageClassName:(*string)(0xc0040b54f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0040b5500), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-cvhjh Mar 22 00:24:47.228: INFO: Deleting pod "pvc-volume-tester-cvhjh" in namespace "csi-mock-volumes-3097" STEP: Deleting claim pvc-h4kr6 STEP: Deleting storageclass csi-mock-volumes-3097-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3097 STEP: Waiting for namespaces [csi-mock-volumes-3097] to vanish STEP: uninstalling csi mock driver Mar 22 00:24:53.267: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-attacher Mar 22 00:24:53.273: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3097 Mar 22 00:24:53.307: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3097 Mar 22 00:24:53.311: INFO: deleting *v1.Role: csi-mock-volumes-3097-9535/external-attacher-cfg-csi-mock-volumes-3097 Mar 22 00:24:53.316: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3097-9535/csi-attacher-role-cfg Mar 22 00:24:53.320: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-provisioner Mar 22 00:24:53.383: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3097 Mar 22 00:24:53.439: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3097 Mar 22 00:24:53.447: INFO: deleting *v1.Role: csi-mock-volumes-3097-9535/external-provisioner-cfg-csi-mock-volumes-3097 Mar 22 00:24:53.453: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3097-9535/csi-provisioner-role-cfg Mar 22 00:24:53.458: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-resizer Mar 22 00:24:53.464: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3097 Mar 22 00:24:53.480: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3097 Mar 22 00:24:53.488: INFO: deleting *v1.Role: csi-mock-volumes-3097-9535/external-resizer-cfg-csi-mock-volumes-3097 Mar 22 00:24:53.495: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3097-9535/csi-resizer-role-cfg Mar 22 00:24:53.501: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-snapshotter Mar 22 00:24:53.524: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3097 Mar 22 00:24:53.562: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3097 Mar 22 00:24:53.573: INFO: deleting *v1.Role: csi-mock-volumes-3097-9535/external-snapshotter-leaderelection-csi-mock-volumes-3097 Mar 22 00:24:53.580: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3097-9535/external-snapshotter-leaderelection Mar 22 
00:24:53.586: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3097-9535/csi-mock Mar 22 00:24:53.592: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3097 Mar 22 00:24:53.603: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3097 Mar 22 00:24:53.722: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3097 Mar 22 00:24:53.755: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3097 Mar 22 00:24:53.768: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3097 Mar 22 00:24:53.789: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3097 Mar 22 00:24:53.795: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3097 Mar 22 00:24:53.801: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3097-9535/csi-mockplugin Mar 22 00:24:53.808: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3097 STEP: deleting the driver namespace: csi-mock-volumes-3097-9535 STEP: Waiting for namespaces [csi-mock-volumes-3097-9535] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:25:49.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:123.417 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":133,"completed":52,"skipped":2807,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:25:49.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 22 00:25:54.069: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6796 PodName:hostexec-latest-worker-q5qxn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:25:54.069: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:25:54.192: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 22 00:25:54.192: INFO: exec latest-worker: stdout: "0\n" Mar 22 00:25:54.192: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 22 00:25:54.192: INFO: exec latest-worker: exit code: 0 Mar 22 00:25:54.192: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:25:54.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6796" for this suite. 
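The [Volume type: gce-localssd-scsi-fs] setup above probes the node for local SSDs by running "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l" through the hostexec pod, and skips when the count is below one. Note the quirk visible in the captured output: ls fails on the missing directory (the stderr line), yet because wc is the last command in the pipeline the exit code is still 0 and stdout is "0\n" — so the decision has to come from parsing stdout, not from the exit code. A minimal sketch of that parse-and-skip step (countLocalSSDs is a hypothetical helper name, not the framework's):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// countLocalSSDs parses the stdout of
// `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l`
// as run on the node via the hostexec pod. When the directory is missing,
// ls writes to stderr but the pipeline still exits 0 and wc prints "0",
// so the skip must key off the parsed count rather than the exit code.
func countLocalSSDs(stdout string) (int, error) {
	return strconv.Atoi(strings.TrimSpace(stdout))
}

func main() {
	n, err := countLocalSSDs("0\n") // stdout captured in the log above
	if err != nil {
		panic(err)
	}
	if n < 1 {
		fmt.Println("SKIP: Requires at least 1 scsi fs localSSD")
	}
}

On this local kind cluster the count is zero, which produces the skip recorded below.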
S [SKIPPING] in Spec Setup (BeforeEach) [4.333 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:25:54.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 22 00:25:54.330: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 22 00:25:54.338: INFO: Default storage class: "standard" Mar 22 00:25:54.338: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Mar 22 00:26:04.389: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionflwlp] to have phase Bound Mar 22 00:26:04.391: INFO: PersistentVolumeClaim pvc-protectionflwlp found and phase=Bound (2.04042ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Deleting the pod that uses the PVC Mar 22 00:26:04.406: INFO: Deleting pod "pvc-tester-4k9bp" in namespace "pvc-protection-6958" Mar 22 00:26:04.411: INFO: Wait up to 5m0s for pod "pvc-tester-4k9bp" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Mar 22 00:26:46.429: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionflwlp to be removed Mar 22 00:26:46.432: INFO: Claim "pvc-protectionflwlp" in namespace "pvc-protection-6958" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:26:46.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-6958" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:52.237 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":133,"completed":53,"skipped":2987,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:26:46.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:26:50.682: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d-backend && mount --bind 
/tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d-backend /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d-backend && ln -s /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d-backend /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d] Namespace:persistent-local-volumes-test-471 PodName:hostexec-latest-worker2-9w4mh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:26:50.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:26:50.858: INFO: Creating a PV followed by a PVC Mar 22 00:26:50.986: INFO: Waiting for PV local-pvtwxm6 to bind to PVC pvc-rftcw Mar 22 00:26:50.986: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rftcw] to have phase Bound Mar 22 00:26:51.134: INFO: PersistentVolumeClaim pvc-rftcw found but phase is Pending instead of Bound. Mar 22 00:26:53.138: INFO: PersistentVolumeClaim pvc-rftcw found but phase is Pending instead of Bound. Mar 22 00:26:55.142: INFO: PersistentVolumeClaim pvc-rftcw found but phase is Pending instead of Bound. Mar 22 00:26:57.147: INFO: PersistentVolumeClaim pvc-rftcw found and phase=Bound (6.160744336s) Mar 22 00:26:57.147: INFO: Waiting up to 3m0s for PersistentVolume local-pvtwxm6 to have phase Bound Mar 22 00:26:57.150: INFO: PersistentVolume local-pvtwxm6 found and phase=Bound (2.533801ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 22 00:26:57.155: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:26:57.156: INFO: Deleting PersistentVolumeClaim "pvc-rftcw" Mar 22 00:26:57.161: INFO: Deleting PersistentVolume "local-pvtwxm6" STEP: Removing the test directory Mar 22 00:26:57.203: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d && umount /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d-backend && rm -r /tmp/local-volume-test-c33a2b45-73d8-448f-a160-c0b45a1ba83d-backend] Namespace:persistent-local-volumes-test-471 PodName:hostexec-latest-worker2-9w4mh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:26:57.203: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:26:57.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-471" for this suite. 
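The nsenter commands recorded above show how the dir-link-bindmounted volume type is built and torn down on the node: a backing directory is created, bind-mounted onto itself, and then exposed through a symlink that the local PV points at; cleanup runs the same steps in reverse order. A sketch of the command construction, with an illustrative path in place of the generated UUID (setupCmd and cleanupCmd are hypothetical helper names):

package main

import "fmt"

// setupCmd builds the node-side shell seen in the log:
// 1. create the backing dir, 2. bind-mount it onto itself,
// 3. expose it through a symlink that the local PV will reference.
func setupCmd(path string) string {
	backend := path + "-backend"
	return fmt.Sprintf("mkdir %s && mount --bind %s %s && ln -s %s %s",
		backend, backend, backend, backend, path)
}

// cleanupCmd reverses the setup: drop the symlink, unmount the bind
// mount, then remove the backing directory.
func cleanupCmd(path string) string {
	backend := path + "-backend"
	return fmt.Sprintf("rm %s && umount %s && rm -r %s", path, backend, backend)
}

func main() {
	p := "/tmp/local-volume-test-example" // illustrative; the real test generates a UUID path
	fmt.Println(setupCmd(p))
	fmt.Println(cleanupCmd(p))
}

Bind-mounting the directory onto itself gives the symlinked path its own mount entry, which appears to be what this variant exercises compared to the plain dir-link type.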
S [SKIPPING] [10.947 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:26:57.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 22 00:26:57.489: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:26:57.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5571" for this suite. 
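The Volume metrics spec above, like the GCEPD and Multi-AZ Cluster Volumes specs elsewhere in this run, is gated on the cloud provider and skips on this local cluster. A minimal sketch of such a gate, assuming a TestContext-like struct; the real framework exposes a helper of this shape (SkipUnlessProviderIs) rather than this exact code:

package main

import "fmt"

// testContext stands in for the framework's test context; only the
// Provider field matters for this sketch.
type testContext struct{ Provider string }

// skipUnlessProviderIs reports whether the spec should be skipped and,
// if so, a reason in the same shape as the messages logged above.
func skipUnlessProviderIs(ctx testContext, providers ...string) (bool, string) {
	for _, p := range providers {
		if ctx.Provider == p {
			return false, ""
		}
	}
	return true, fmt.Sprintf("Only supported for providers %v (not %s)", providers, ctx.Provider)
}

func main() {
	skip, reason := skipUnlessProviderIs(testContext{Provider: "local"}, "gce", "gke", "aws")
	if skip {
		fmt.Println(reason) // "Only supported for providers [gce gke aws] (not local)"
	}
}

With Provider set to "local", this yields exactly the message logged by the Volume metrics BeforeEach above.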
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.114 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:26:57.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 22 00:26:57.598: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:26:57.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1468" for this suite. 
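The [Volume type: dir] spec below waits for the claim to bind by polling its phase roughly every two seconds under a 3m0s deadline, emitting the "found but phase is Pending instead of Bound" lines until phase=Bound. A sketch of that wait using client-go primitives (waitForPVCBound is a hypothetical name; the namespace, claim name, and kubeconfig path are taken from this run's output):

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim every 2s, up to 3m, until it reports
// phase Bound, logging a Pending line on each miss as in the output below.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "persistent-local-volumes-test-4666", "pvc-c8rnn"); err != nil {
		panic(err)
	}
}

PollImmediate checks once before the first sleep, which matches the first Pending line appearing right after the PV/PVC pair is created in the output below.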
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 22 00:26:57.616: INFO: AfterEach: Cleaning up test resources Mar 22 00:26:57.616: INFO: pvc is nil Mar 22 00:26:57.616: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.106 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:26:57.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:27:01.738: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-40fe5743-1313-45fb-9564-a0fce4804c4b] Namespace:persistent-local-volumes-test-4666 PodName:hostexec-latest-worker-4scdz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:27:01.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:27:01.864: INFO: Creating a PV followed by a PVC Mar 22 00:27:01.872: INFO: Waiting for PV local-pv9sx2d to bind to PVC pvc-c8rnn Mar 22 00:27:01.872: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-c8rnn] to have phase Bound Mar 22 00:27:01.921: INFO: PersistentVolumeClaim pvc-c8rnn found but phase is Pending instead of Bound. Mar 22 00:27:03.926: INFO: PersistentVolumeClaim pvc-c8rnn found but phase is Pending instead of Bound. Mar 22 00:27:05.930: INFO: PersistentVolumeClaim pvc-c8rnn found but phase is Pending instead of Bound. Mar 22 00:27:07.935: INFO: PersistentVolumeClaim pvc-c8rnn found but phase is Pending instead of Bound. Mar 22 00:27:09.940: INFO: PersistentVolumeClaim pvc-c8rnn found but phase is Pending instead of Bound. 
Mar 22 00:27:11.944: INFO: PersistentVolumeClaim pvc-c8rnn found and phase=Bound (10.072007317s) Mar 22 00:27:11.944: INFO: Waiting up to 3m0s for PersistentVolume local-pv9sx2d to have phase Bound Mar 22 00:27:11.948: INFO: PersistentVolume local-pv9sx2d found and phase=Bound (3.457625ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:27:16.004: INFO: pod "pod-adfa6ce1-643b-41b3-935e-e8a9f3bd96c4" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:27:16.004: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4666 PodName:pod-adfa6ce1-643b-41b3-935e-e8a9f3bd96c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:27:16.004: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:27:16.125: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:27:16.125: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4666 PodName:pod-adfa6ce1-643b-41b3-935e-e8a9f3bd96c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:27:16.125: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:27:16.221: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-adfa6ce1-643b-41b3-935e-e8a9f3bd96c4 in namespace persistent-local-volumes-test-4666 STEP: Creating pod2 STEP: Creating a pod Mar 22 00:27:20.294: INFO: pod "pod-00b21cdc-9021-42e8-9d06-d2013323464d" created on Node "latest-worker" STEP: Reading in pod2 Mar 22 00:27:20.294: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4666 PodName:pod-00b21cdc-9021-42e8-9d06-d2013323464d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:27:20.294: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:27:20.374: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-00b21cdc-9021-42e8-9d06-d2013323464d in namespace persistent-local-volumes-test-4666 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:27:20.381: INFO: Deleting PersistentVolumeClaim "pvc-c8rnn" Mar 22 00:27:20.409: INFO: Deleting PersistentVolume "local-pv9sx2d" STEP: Removing the test directory Mar 22 00:27:20.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-40fe5743-1313-45fb-9564-a0fce4804c4b] Namespace:persistent-local-volumes-test-4666 PodName:hostexec-latest-worker-4scdz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:27:20.445: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:27:20.624: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4666" for this suite. • [SLOW TEST:23.027 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":54,"skipped":3277,"failed":9,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:27:20.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 22 00:27:24.824: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4211 
PodName:hostexec-latest-worker2-hkjg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:27:24.824: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:27:24.954: INFO: exec latest-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 22 00:27:24.954: INFO: exec latest-worker2: stdout: "0\n" Mar 22 00:27:24.954: INFO: exec latest-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 22 00:27:24.954: INFO: exec latest-worker2: exit code: 0 Mar 22 00:27:24.954: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:27:24.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4211" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.317 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:27:24.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-8748 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:27:25.341: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-attacher Mar 22 00:27:25.349: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8748 Mar 22 00:27:25.349: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8748 Mar 22 00:27:25.354: INFO: creating *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-8748 Mar 22 00:27:25.368: INFO: creating *v1.Role: csi-mock-volumes-8748-4471/external-attacher-cfg-csi-mock-volumes-8748 Mar 22 00:27:25.379: INFO: creating *v1.RoleBinding: csi-mock-volumes-8748-4471/csi-attacher-role-cfg Mar 22 00:27:25.442: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-provisioner Mar 22 00:27:25.457: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8748 Mar 22 00:27:25.457: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8748 Mar 22 00:27:25.475: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8748 Mar 22 00:27:25.493: INFO: creating *v1.Role: csi-mock-volumes-8748-4471/external-provisioner-cfg-csi-mock-volumes-8748 Mar 22 00:27:25.511: INFO: creating *v1.RoleBinding: csi-mock-volumes-8748-4471/csi-provisioner-role-cfg Mar 22 00:27:25.573: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-resizer Mar 22 00:27:25.601: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8748 Mar 22 00:27:25.601: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8748 Mar 22 00:27:25.606: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8748 Mar 22 00:27:25.626: INFO: creating *v1.Role: csi-mock-volumes-8748-4471/external-resizer-cfg-csi-mock-volumes-8748 Mar 22 00:27:25.649: INFO: creating *v1.RoleBinding: csi-mock-volumes-8748-4471/csi-resizer-role-cfg Mar 22 00:27:25.693: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-snapshotter Mar 22 00:27:25.698: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8748 Mar 22 00:27:25.698: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8748 Mar 22 00:27:25.710: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8748 Mar 22 00:27:25.739: INFO: creating *v1.Role: csi-mock-volumes-8748-4471/external-snapshotter-leaderelection-csi-mock-volumes-8748 Mar 22 00:27:25.757: INFO: creating *v1.RoleBinding: csi-mock-volumes-8748-4471/external-snapshotter-leaderelection Mar 22 00:27:25.788: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-mock Mar 22 00:27:25.843: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8748 Mar 22 00:27:25.847: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8748 Mar 22 00:27:25.864: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8748 Mar 22 00:27:25.884: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8748 Mar 22 00:27:25.894: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8748 Mar 22 00:27:25.900: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8748 Mar 22 00:27:25.914: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8748 Mar 22 00:27:25.924: INFO: creating *v1.StatefulSet: csi-mock-volumes-8748-4471/csi-mockplugin Mar 22 00:27:25.930: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8748 Mar 22 00:27:25.987: INFO: creating *v1.StatefulSet: csi-mock-volumes-8748-4471/csi-mockplugin-attacher Mar 22 00:27:26.009: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8748" Mar 22 00:27:26.059: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8748 to register on node latest-worker2 Mar 22 00:27:36.064: FAIL: create 
CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-8748 Capacity:1Mi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc0038ff180>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 +0x47a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002dc4900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002dc4900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8748 STEP: Waiting for namespaces [csi-mock-volumes-8748] to vanish STEP: uninstalling csi mock driver Mar 22 00:27:42.073: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-attacher Mar 22 00:27:42.079: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8748 Mar 22 00:27:42.088: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8748 Mar 22 00:27:42.099: INFO: deleting *v1.Role: csi-mock-volumes-8748-4471/external-attacher-cfg-csi-mock-volumes-8748 Mar 22 00:27:42.130: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8748-4471/csi-attacher-role-cfg Mar 22 00:27:42.175: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-provisioner Mar 22 00:27:42.184: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8748 Mar 22 00:27:42.254: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8748 Mar 22 00:27:42.261: INFO: deleting *v1.Role: csi-mock-volumes-8748-4471/external-provisioner-cfg-csi-mock-volumes-8748 Mar 22 00:27:42.267: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8748-4471/csi-provisioner-role-cfg Mar 22 00:27:42.287: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-resizer Mar 22 00:27:42.298: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8748 Mar 22 00:27:42.304: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8748 Mar 22 00:27:42.315: INFO: deleting *v1.Role: csi-mock-volumes-8748-4471/external-resizer-cfg-csi-mock-volumes-8748 Mar 22 00:27:42.322: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8748-4471/csi-resizer-role-cfg Mar 22 00:27:42.338: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-snapshotter Mar 22 00:27:42.352: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8748 
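The failure dumped above deserves a note: a 404 "the server could not find the requested resource" on a create does not mean some named object is missing, it means the csistoragecapacities resource itself (here assumed to be storage.k8s.io/v1beta1) is not served by this apiserver at all, for example because of client/server version skew or a disabled API version. A sketch of the failing call and how to recognize that condition; all names are illustrative:

    package e2esketch

    import (
        "context"
        "fmt"

        storagev1beta1 "k8s.io/api/storage/v1beta1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createFakeCapacity mirrors the object shape in the dump above:
    // GenerateName fake-capacity-, an empty NodeTopology selector (matches
    // every node), and a 1Mi capacity for the mock StorageClass.
    func createFakeCapacity(c kubernetes.Interface, ns, scName string) error {
        capQty := resource.MustParse("1Mi")
        csc := &storagev1beta1.CSIStorageCapacity{
            ObjectMeta:       metav1.ObjectMeta{GenerateName: "fake-capacity-"},
            NodeTopology:     &metav1.LabelSelector{},
            StorageClassName: scName,
            Capacity:         &capQty,
        }
        _, err := c.StorageV1beta1().CSIStorageCapacities(ns).Create(context.TODO(), csc, metav1.CreateOptions{})
        if apierrors.IsNotFound(err) {
            // On a create, NotFound can only refer to the resource kind:
            // the CSIStorageCapacity API version is not being served.
            return fmt.Errorf("CSIStorageCapacity API not available: %w", err)
        }
        return err
    }

The other CSIStorageCapacity entries in this run's failure list are consistent with the same unserved API version.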
Mar 22 00:27:42.358: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8748 Mar 22 00:27:42.370: INFO: deleting *v1.Role: csi-mock-volumes-8748-4471/external-snapshotter-leaderelection-csi-mock-volumes-8748 Mar 22 00:27:42.375: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8748-4471/external-snapshotter-leaderelection Mar 22 00:27:42.445: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8748-4471/csi-mock Mar 22 00:27:42.466: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8748 Mar 22 00:27:42.496: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8748 Mar 22 00:27:42.588: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8748 Mar 22 00:27:42.615: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8748 Mar 22 00:27:42.639: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8748 Mar 22 00:27:42.645: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8748 Mar 22 00:27:42.651: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8748 Mar 22 00:27:42.657: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8748-4471/csi-mockplugin Mar 22 00:27:42.663: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8748 Mar 22 00:27:42.670: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8748-4471/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8748-4471 STEP: Waiting for namespaces [csi-mock-volumes-8748-4471] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:28:28.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [63.730 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, insufficient capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 22 00:27:36.064: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-8748 Capacity:1Mi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc0038ff180>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 ------------------------------ {"msg":"FAILED 
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":133,"completed":54,"skipped":3351,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:28:28.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-7518 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:28:28.870: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-attacher Mar 22 00:28:28.874: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7518 Mar 22 00:28:28.874: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7518 Mar 22 00:28:28.881: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7518 Mar 22 00:28:28.921: INFO: creating *v1.Role: csi-mock-volumes-7518-4288/external-attacher-cfg-csi-mock-volumes-7518 Mar 22 00:28:28.939: INFO: creating *v1.RoleBinding: csi-mock-volumes-7518-4288/csi-attacher-role-cfg Mar 22 00:28:28.959: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-provisioner Mar 22 00:28:28.995: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7518 Mar 22 00:28:28.995: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7518 Mar 22 00:28:29.000: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7518 Mar 22 00:28:29.006: INFO: creating *v1.Role: 
csi-mock-volumes-7518-4288/external-provisioner-cfg-csi-mock-volumes-7518 Mar 22 00:28:29.072: INFO: creating *v1.RoleBinding: csi-mock-volumes-7518-4288/csi-provisioner-role-cfg Mar 22 00:28:29.119: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-resizer Mar 22 00:28:29.147: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7518 Mar 22 00:28:29.147: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7518 Mar 22 00:28:29.209: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7518 Mar 22 00:28:29.213: INFO: creating *v1.Role: csi-mock-volumes-7518-4288/external-resizer-cfg-csi-mock-volumes-7518 Mar 22 00:28:29.225: INFO: creating *v1.RoleBinding: csi-mock-volumes-7518-4288/csi-resizer-role-cfg Mar 22 00:28:29.268: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-snapshotter Mar 22 00:28:29.285: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7518 Mar 22 00:28:29.285: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7518 Mar 22 00:28:29.291: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7518 Mar 22 00:28:29.353: INFO: creating *v1.Role: csi-mock-volumes-7518-4288/external-snapshotter-leaderelection-csi-mock-volumes-7518 Mar 22 00:28:29.370: INFO: creating *v1.RoleBinding: csi-mock-volumes-7518-4288/external-snapshotter-leaderelection Mar 22 00:28:29.399: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-mock Mar 22 00:28:29.405: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7518 Mar 22 00:28:29.411: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7518 Mar 22 00:28:29.424: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7518 Mar 22 00:28:29.435: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7518 Mar 22 00:28:29.448: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7518 Mar 22 00:28:29.490: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7518 Mar 22 00:28:29.493: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7518 Mar 22 00:28:29.501: INFO: creating *v1.StatefulSet: csi-mock-volumes-7518-4288/csi-mockplugin Mar 22 00:28:29.507: INFO: creating *v1.StatefulSet: csi-mock-volumes-7518-4288/csi-mockplugin-attacher Mar 22 00:28:29.525: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7518 to register on node latest-worker2 STEP: Creating pod Mar 22 00:28:39.089: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:28:39.102: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-l6gj6] to have phase Bound Mar 22 00:28:39.106: INFO: PersistentVolumeClaim pvc-l6gj6 found but phase is Pending instead of Bound. 
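Once the claim binds (next entries), the spec checks that a VolumeAttachment was created for the pod's PV: with no CSIDriver object installed, attachment is required by default, and preserving that default is the "attachment policy" under test. A sketch of such a check; the helper name is an illustrative assumption:

    package e2esketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasVolumeAttachmentForPV scans the cluster-scoped VolumeAttachment
    // objects for one whose source is the given PersistentVolume.
    func hasVolumeAttachmentForPV(c kubernetes.Interface, pvName string) (bool, error) {
        vas, err := c.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, va := range vas.Items {
            src := va.Spec.Source.PersistentVolumeName
            if src != nil && *src == pvName {
                fmt.Printf("found attachment %s on node %s via attacher %s\n",
                    va.Name, va.Spec.NodeName, va.Spec.Attacher)
                return true, nil
            }
        }
        return false, nil
    }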
Mar 22 00:28:41.126: INFO: PersistentVolumeClaim pvc-l6gj6 found and phase=Bound (2.023616137s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-f8psj Mar 22 00:28:53.169: INFO: Deleting pod "pvc-volume-tester-f8psj" in namespace "csi-mock-volumes-7518" Mar 22 00:28:53.177: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f8psj" to be fully deleted STEP: Deleting claim pvc-l6gj6 Mar 22 00:29:25.221: INFO: Waiting up to 2m0s for PersistentVolume pvc-32136ca6-42ec-4518-a983-b656f7a0d4c7 to get deleted Mar 22 00:29:25.239: INFO: PersistentVolume pvc-32136ca6-42ec-4518-a983-b656f7a0d4c7 found and phase=Bound (18.109063ms) Mar 22 00:29:27.243: INFO: PersistentVolume pvc-32136ca6-42ec-4518-a983-b656f7a0d4c7 was removed STEP: Deleting storageclass csi-mock-volumes-7518-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7518 STEP: Waiting for namespaces [csi-mock-volumes-7518] to vanish STEP: uninstalling csi mock driver Mar 22 00:29:33.271: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-attacher Mar 22 00:29:33.276: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7518 Mar 22 00:29:33.366: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7518 Mar 22 00:29:33.372: INFO: deleting *v1.Role: csi-mock-volumes-7518-4288/external-attacher-cfg-csi-mock-volumes-7518 Mar 22 00:29:33.379: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7518-4288/csi-attacher-role-cfg Mar 22 00:29:33.403: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-provisioner Mar 22 00:29:33.427: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7518 Mar 22 00:29:33.488: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7518 Mar 22 00:29:33.504: INFO: deleting *v1.Role: csi-mock-volumes-7518-4288/external-provisioner-cfg-csi-mock-volumes-7518 Mar 22 00:29:33.526: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7518-4288/csi-provisioner-role-cfg Mar 22 00:29:33.541: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-resizer Mar 22 00:29:33.547: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7518 Mar 22 00:29:33.556: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7518 Mar 22 00:29:33.566: INFO: deleting *v1.Role: csi-mock-volumes-7518-4288/external-resizer-cfg-csi-mock-volumes-7518 Mar 22 00:29:33.570: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7518-4288/csi-resizer-role-cfg Mar 22 00:29:33.576: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-snapshotter Mar 22 00:29:33.620: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7518 Mar 22 00:29:33.635: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7518 Mar 22 00:29:33.644: INFO: deleting *v1.Role: csi-mock-volumes-7518-4288/external-snapshotter-leaderelection-csi-mock-volumes-7518 Mar 22 00:29:33.665: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7518-4288/external-snapshotter-leaderelection Mar 22 00:29:33.679: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7518-4288/csi-mock Mar 22 00:29:33.685: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7518 Mar 22 00:29:33.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7518 Mar 22 00:29:33.720: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7518 Mar 22 00:29:33.740: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7518 Mar 22 00:29:33.751: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7518 Mar 22 00:29:33.775: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7518 Mar 22 00:29:33.793: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7518 Mar 22 00:29:33.815: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7518-4288/csi-mockplugin Mar 22 00:29:33.829: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7518-4288/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7518-4288 STEP: Waiting for namespaces [csi-mock-volumes-7518-4288] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:29.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:121.173 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":133,"completed":55,"skipped":3405,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes 
client Mar 22 00:30:29.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 22 00:30:29.961: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:29.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5300" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.132 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:30:30.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:30:34.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-19f9ccde-4574-423a-bf7e-538f9f8f40a9-backend && ln -s /tmp/local-volume-test-19f9ccde-4574-423a-bf7e-538f9f8f40a9-backend /tmp/local-volume-test-19f9ccde-4574-423a-bf7e-538f9f8f40a9] Namespace:persistent-local-volumes-test-7484 PodName:hostexec-latest-worker-94tm8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:30:34.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:30:34.322: INFO: Creating a PV followed by a PVC Mar 22 00:30:34.337: INFO: Waiting for PV local-pv62jxj to bind to 
PVC pvc-62m4d Mar 22 00:30:34.337: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-62m4d] to have phase Bound Mar 22 00:30:34.361: INFO: PersistentVolumeClaim pvc-62m4d found but phase is Pending instead of Bound. Mar 22 00:30:36.366: INFO: PersistentVolumeClaim pvc-62m4d found but phase is Pending instead of Bound. Mar 22 00:30:38.370: INFO: PersistentVolumeClaim pvc-62m4d found but phase is Pending instead of Bound. Mar 22 00:30:40.376: INFO: PersistentVolumeClaim pvc-62m4d found but phase is Pending instead of Bound. Mar 22 00:30:42.381: INFO: PersistentVolumeClaim pvc-62m4d found and phase=Bound (8.044396061s) Mar 22 00:30:42.381: INFO: Waiting up to 3m0s for PersistentVolume local-pv62jxj to have phase Bound Mar 22 00:30:42.385: INFO: PersistentVolume local-pv62jxj found and phase=Bound (3.353467ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 22 00:30:42.391: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:30:42.392: INFO: Deleting PersistentVolumeClaim "pvc-62m4d" Mar 22 00:30:42.397: INFO: Deleting PersistentVolume "local-pv62jxj" STEP: Removing the test directory Mar 22 00:30:42.434: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-19f9ccde-4574-423a-bf7e-538f9f8f40a9 && rm -r /tmp/local-volume-test-19f9ccde-4574-423a-bf7e-538f9f8f40a9-backend] Namespace:persistent-local-volumes-test-7484 PodName:hostexec-latest-worker-94tm8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:30:42.434: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:42.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7484" for this suite. 
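The spec itself is skipped pending #73168, but the mechanism it targets is ordinary pod-level fsGroup handling: the kubelet is expected to apply the pod's fsGroup as the group owner of the mounted volume, and the disabled assertion is that a second pod with a different fsGroup sees the new group after the first pod is deleted. A sketch of the pod shape involved; the image and names are illustrative assumptions:

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithFSGroup builds a pod that mounts the local-volume PVC with a
    // pod-level fsGroup, the setting the skipped specs would vary between
    // pod1 and pod2.
    func podWithFSGroup(pvcName string, fsGroup int64) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "fsgroup-tester-"},
            Spec: v1.PodSpec{
                SecurityContext: &v1.PodSecurityContext{FSGroup: &fsGroup},
                Containers: []v1.Container{{
                    Name:    "write-pod",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "id -G; sleep 3600"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "volume1",
                        MountPath: "/mnt/volume1",
                    }},
                }},
                Volumes: []v1.Volume{{
                    Name: "volume1",
                    VolumeSource: v1.VolumeSource{
                        PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: pvcName},
                    },
                }},
            },
        }
    }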
S [SKIPPING] [12.608 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:30:42.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-4498 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:30:42.881: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-attacher Mar 22 00:30:42.885: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4498 Mar 22 00:30:42.885: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4498 Mar 22 00:30:42.896: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4498 Mar 22 00:30:42.944: INFO: creating *v1.Role: csi-mock-volumes-4498-9469/external-attacher-cfg-csi-mock-volumes-4498 Mar 22 00:30:42.968: INFO: creating *v1.RoleBinding: csi-mock-volumes-4498-9469/csi-attacher-role-cfg Mar 22 00:30:42.986: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-provisioner Mar 22 00:30:43.010: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4498 Mar 22 00:30:43.010: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4498 Mar 22 00:30:43.015: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4498 Mar 22 00:30:43.086: INFO: creating *v1.Role: csi-mock-volumes-4498-9469/external-provisioner-cfg-csi-mock-volumes-4498 Mar 22 00:30:43.090: INFO: creating *v1.RoleBinding: csi-mock-volumes-4498-9469/csi-provisioner-role-cfg Mar 22 00:30:43.106: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-resizer Mar 22 00:30:43.148: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4498 Mar 22 00:30:43.148: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4498 Mar 22 00:30:43.165: INFO: 
creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4498 Mar 22 00:30:43.171: INFO: creating *v1.Role: csi-mock-volumes-4498-9469/external-resizer-cfg-csi-mock-volumes-4498 Mar 22 00:30:43.229: INFO: creating *v1.RoleBinding: csi-mock-volumes-4498-9469/csi-resizer-role-cfg Mar 22 00:30:43.233: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-snapshotter Mar 22 00:30:43.249: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4498 Mar 22 00:30:43.249: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4498 Mar 22 00:30:43.261: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4498 Mar 22 00:30:43.267: INFO: creating *v1.Role: csi-mock-volumes-4498-9469/external-snapshotter-leaderelection-csi-mock-volumes-4498 Mar 22 00:30:43.273: INFO: creating *v1.RoleBinding: csi-mock-volumes-4498-9469/external-snapshotter-leaderelection Mar 22 00:30:43.291: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-mock Mar 22 00:30:43.303: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4498 Mar 22 00:30:43.309: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4498 Mar 22 00:30:43.403: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4498 Mar 22 00:30:43.418: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4498 Mar 22 00:30:43.427: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4498 Mar 22 00:30:43.433: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4498 Mar 22 00:30:43.439: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4498 Mar 22 00:30:43.445: INFO: creating *v1.StatefulSet: csi-mock-volumes-4498-9469/csi-mockplugin Mar 22 00:30:43.466: INFO: creating *v1.StatefulSet: csi-mock-volumes-4498-9469/csi-mockplugin-attacher Mar 22 00:30:43.496: INFO: creating *v1.StatefulSet: csi-mock-volumes-4498-9469/csi-mockplugin-resizer Mar 22 00:30:43.547: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4498 to register on node latest-worker2 STEP: Creating pod Mar 22 00:31:00.313: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:31:00.363: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-d42p5] to have phase Bound Mar 22 00:31:00.374: INFO: PersistentVolumeClaim pvc-d42p5 found but phase is Pending instead of Bound. 
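The "Expanding current pvc" step that follows is driven entirely through the claim: raising spec.resources.requests.storage triggers the external resizer (controller-side expansion) and then the kubelet (node-side expansion), with no pod restart, which is exactly what attach=on, nodeExpansion=on exercises. A sketch, assuming the StorageClass has allowVolumeExpansion: true and using invented names:

    package e2esketch

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // expandPVC requests online expansion by updating the bound claim's
    // storage request; the resize controllers reconcile the rest.
    func expandPVC(c kubernetes.Interface, ns, name, newSize string) (*v1.PersistentVolumeClaim, error) {
        pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return nil, err
        }
        pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse(newSize)
        // The apiserver rejects this update unless the claim's
        // StorageClass permits volume expansion.
        return c.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
    }

The two "Waiting for ... resize to finish" steps after the update correspond to the PV capacity change and the PVC status conditions clearing once node expansion completes.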
Mar 22 00:31:02.380: INFO: PersistentVolumeClaim pvc-d42p5 found and phase=Bound (2.016499954s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-7tcql Mar 22 00:32:24.477: INFO: Deleting pod "pvc-volume-tester-7tcql" in namespace "csi-mock-volumes-4498" Mar 22 00:32:24.482: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7tcql" to be fully deleted STEP: Deleting claim pvc-d42p5 Mar 22 00:33:26.520: INFO: Waiting up to 2m0s for PersistentVolume pvc-6b2a6317-442e-4760-9059-75196954051f to get deleted Mar 22 00:33:26.573: INFO: PersistentVolume pvc-6b2a6317-442e-4760-9059-75196954051f found and phase=Bound (52.645823ms) Mar 22 00:33:28.578: INFO: PersistentVolume pvc-6b2a6317-442e-4760-9059-75196954051f was removed STEP: Deleting storageclass csi-mock-volumes-4498-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4498 STEP: Waiting for namespaces [csi-mock-volumes-4498] to vanish STEP: uninstalling csi mock driver Mar 22 00:33:34.601: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-attacher Mar 22 00:33:34.606: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4498 Mar 22 00:33:34.625: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4498 Mar 22 00:33:34.657: INFO: deleting *v1.Role: csi-mock-volumes-4498-9469/external-attacher-cfg-csi-mock-volumes-4498 Mar 22 00:33:34.686: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4498-9469/csi-attacher-role-cfg Mar 22 00:33:34.722: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-provisioner Mar 22 00:33:34.727: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4498 Mar 22 00:33:34.734: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4498 Mar 22 00:33:34.740: INFO: deleting *v1.Role: csi-mock-volumes-4498-9469/external-provisioner-cfg-csi-mock-volumes-4498 Mar 22 00:33:34.745: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4498-9469/csi-provisioner-role-cfg Mar 22 00:33:34.752: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-resizer Mar 22 00:33:34.777: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4498 Mar 22 00:33:34.788: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4498 Mar 22 00:33:34.821: INFO: deleting *v1.Role: csi-mock-volumes-4498-9469/external-resizer-cfg-csi-mock-volumes-4498 Mar 22 00:33:34.836: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4498-9469/csi-resizer-role-cfg Mar 22 00:33:34.849: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-snapshotter Mar 22 00:33:34.853: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4498 Mar 22 00:33:34.859: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4498 Mar 22 00:33:34.916: INFO: deleting *v1.Role: csi-mock-volumes-4498-9469/external-snapshotter-leaderelection-csi-mock-volumes-4498 Mar 22 00:33:34.944: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4498-9469/external-snapshotter-leaderelection Mar 22 00:33:34.950: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4498-9469/csi-mock Mar 22 00:33:34.974: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4498 Mar 22 00:33:35.064: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4498 Mar 22 00:33:35.083: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4498 Mar 22 00:33:35.106: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4498 Mar 22 00:33:35.130: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4498 Mar 22 00:33:35.136: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4498 Mar 22 00:33:35.161: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4498 Mar 22 00:33:35.185: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4498-9469/csi-mockplugin Mar 22 00:33:35.195: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4498-9469/csi-mockplugin-attacher Mar 22 00:33:35.202: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4498-9469/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-4498-9469 STEP: Waiting for namespaces [csi-mock-volumes-4498-9469] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:27.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:224.643 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":133,"completed":56,"skipped":3529,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:27.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-8533 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:34:27.590: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-attacher Mar 22 00:34:27.594: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8533 Mar 22 00:34:27.594: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8533 Mar 22 00:34:27.688: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8533 Mar 22 00:34:27.691: INFO: creating *v1.Role: csi-mock-volumes-8533-6813/external-attacher-cfg-csi-mock-volumes-8533 Mar 22 00:34:27.698: INFO: creating *v1.RoleBinding: csi-mock-volumes-8533-6813/csi-attacher-role-cfg Mar 22 00:34:27.721: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-provisioner Mar 22 00:34:27.740: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8533 Mar 22 00:34:27.740: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8533 Mar 22 00:34:27.746: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8533 Mar 22 00:34:27.771: INFO: creating *v1.Role: csi-mock-volumes-8533-6813/external-provisioner-cfg-csi-mock-volumes-8533 Mar 22 00:34:27.807: INFO: creating *v1.RoleBinding: csi-mock-volumes-8533-6813/csi-provisioner-role-cfg Mar 22 00:34:27.824: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-resizer Mar 22 00:34:27.842: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8533 Mar 22 00:34:27.842: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8533 Mar 22 00:34:27.866: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8533 Mar 22 00:34:27.872: INFO: creating *v1.Role: csi-mock-volumes-8533-6813/external-resizer-cfg-csi-mock-volumes-8533 Mar 22 00:34:27.877: INFO: creating *v1.RoleBinding: csi-mock-volumes-8533-6813/csi-resizer-role-cfg Mar 22 00:34:27.897: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-snapshotter Mar 22 00:34:27.975: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8533 Mar 22 00:34:27.975: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8533 Mar 22 00:34:27.985: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8533 Mar 22 00:34:27.991: INFO: creating *v1.Role: csi-mock-volumes-8533-6813/external-snapshotter-leaderelection-csi-mock-volumes-8533 Mar 22 00:34:27.997: INFO: creating *v1.RoleBinding: csi-mock-volumes-8533-6813/external-snapshotter-leaderelection Mar 22 00:34:28.015: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-mock Mar 22 00:34:28.028: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8533 Mar 22 00:34:28.033: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8533 Mar 22 00:34:28.052: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8533 Mar 22 00:34:28.069: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8533 Mar 22 00:34:28.112: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8533 Mar 22 00:34:28.122: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8533 Mar 22 00:34:28.128: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8533 Mar 22 00:34:28.135: INFO: creating *v1.StatefulSet: csi-mock-volumes-8533-6813/csi-mockplugin Mar 22 00:34:28.141: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8533 Mar 22 00:34:28.161: INFO: creating *v1.StatefulSet: csi-mock-volumes-8533-6813/csi-mockplugin-attacher Mar 22 00:34:28.191: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8533" Mar 22 00:34:28.207: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8533 to register on node latest-worker2 STEP: Creating pod Mar 22 00:34:42.869: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Mar 22 00:34:55.196: INFO: Deleting pod "pvc-volume-tester-lnnz6" in namespace "csi-mock-volumes-8533" Mar 22 00:34:55.202: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lnnz6" to be fully deleted STEP: Deleting pod pvc-volume-tester-lnnz6 Mar 22 00:35:05.220: INFO: Deleting pod "pvc-volume-tester-lnnz6" in namespace "csi-mock-volumes-8533" STEP: Deleting claim pvc-tdxk6 Mar 22 00:35:05.231: INFO: Waiting up to 2m0s for PersistentVolume pvc-33037bd8-33f7-4888-9f6a-e7471665f2d8 to get deleted Mar 22 00:35:05.238: INFO: PersistentVolume pvc-33037bd8-33f7-4888-9f6a-e7471665f2d8 found and phase=Bound (7.441966ms) Mar 22 00:35:07.243: INFO: PersistentVolume pvc-33037bd8-33f7-4888-9f6a-e7471665f2d8 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-8533 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8533 STEP: Waiting for namespaces [csi-mock-volumes-8533] to vanish STEP: uninstalling csi mock driver Mar 22 00:35:13.283: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-attacher Mar 22 00:35:13.289: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8533 Mar 22 00:35:13.371: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8533 Mar 22 00:35:13.382: INFO: deleting *v1.Role: csi-mock-volumes-8533-6813/external-attacher-cfg-csi-mock-volumes-8533 Mar 22 00:35:13.386: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8533-6813/csi-attacher-role-cfg Mar 22 00:35:13.390: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-provisioner Mar 22 00:35:13.395: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8533 Mar 22 00:35:13.406: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8533 Mar 22 00:35:13.414: INFO: deleting *v1.Role: csi-mock-volumes-8533-6813/external-provisioner-cfg-csi-mock-volumes-8533 Mar 22 00:35:13.420: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8533-6813/csi-provisioner-role-cfg Mar 22 00:35:13.425: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-resizer Mar 22 00:35:13.432: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8533 Mar 22 00:35:13.466: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8533 Mar 22 00:35:13.492: INFO: deleting *v1.Role: 
csi-mock-volumes-8533-6813/external-resizer-cfg-csi-mock-volumes-8533 Mar 22 00:35:13.504: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8533-6813/csi-resizer-role-cfg Mar 22 00:35:13.511: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-snapshotter Mar 22 00:35:13.515: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8533 Mar 22 00:35:13.522: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8533 Mar 22 00:35:13.533: INFO: deleting *v1.Role: csi-mock-volumes-8533-6813/external-snapshotter-leaderelection-csi-mock-volumes-8533 Mar 22 00:35:13.539: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8533-6813/external-snapshotter-leaderelection Mar 22 00:35:13.545: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8533-6813/csi-mock Mar 22 00:35:13.565: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8533 Mar 22 00:35:13.576: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8533 Mar 22 00:35:13.647: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8533 Mar 22 00:35:13.658: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8533 Mar 22 00:35:13.678: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8533 Mar 22 00:35:13.683: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8533 Mar 22 00:35:13.689: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8533 Mar 22 00:35:13.694: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8533-6813/csi-mockplugin Mar 22 00:35:13.745: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8533 Mar 22 00:35:13.755: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8533-6813/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8533-6813 STEP: Waiting for namespaces [csi-mock-volumes-8533-6813] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:35:41.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:74.523 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":133,"completed":57,"skipped":3569,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local 
[Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:35:41.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-732 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:35:41.986: INFO: creating *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-attacher Mar 22 00:35:41.990: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-732 Mar 22 00:35:41.990: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-732 Mar 22 00:35:41.994: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-732 Mar 22 00:35:42.000: INFO: creating *v1.Role: csi-mock-volumes-732-3320/external-attacher-cfg-csi-mock-volumes-732 Mar 22 00:35:42.013: INFO: creating *v1.RoleBinding: csi-mock-volumes-732-3320/csi-attacher-role-cfg Mar 22 00:35:42.084: INFO: creating *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-provisioner Mar 22 00:35:42.087: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-732 Mar 22 00:35:42.087: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-732 Mar 22 00:35:42.095: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-732 Mar 22 00:35:42.101: INFO: creating *v1.Role: csi-mock-volumes-732-3320/external-provisioner-cfg-csi-mock-volumes-732 Mar 22 00:35:42.107: INFO: creating *v1.RoleBinding: csi-mock-volumes-732-3320/csi-provisioner-role-cfg Mar 22 00:35:42.138: INFO: creating *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-resizer Mar 22 00:35:42.155: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-732 Mar 22 00:35:42.155: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-732 Mar 22 00:35:42.180: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-732 Mar 22 00:35:42.250: INFO: creating *v1.Role: csi-mock-volumes-732-3320/external-resizer-cfg-csi-mock-volumes-732 Mar 22 00:35:42.276: INFO: creating *v1.RoleBinding: csi-mock-volumes-732-3320/csi-resizer-role-cfg Mar 22 00:35:42.401: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-732-3320/csi-snapshotter Mar 22 00:35:42.405: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-732 Mar 22 00:35:42.405: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-732 Mar 22 00:35:42.419: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-732 Mar 22 00:35:42.445: INFO: creating *v1.Role: csi-mock-volumes-732-3320/external-snapshotter-leaderelection-csi-mock-volumes-732 Mar 22 00:35:42.462: INFO: creating *v1.RoleBinding: csi-mock-volumes-732-3320/external-snapshotter-leaderelection Mar 22 00:35:42.490: INFO: creating *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-mock Mar 22 00:35:42.521: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-732 Mar 22 00:35:42.524: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-732 Mar 22 00:35:42.527: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-732 Mar 22 00:35:42.533: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-732 Mar 22 00:35:42.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-732 Mar 22 00:35:42.562: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-732 Mar 22 00:35:42.598: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-732 Mar 22 00:35:42.611: INFO: creating *v1.StatefulSet: csi-mock-volumes-732-3320/csi-mockplugin Mar 22 00:35:42.617: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-732 Mar 22 00:35:42.665: INFO: creating *v1.StatefulSet: csi-mock-volumes-732-3320/csi-mockplugin-attacher Mar 22 00:35:42.684: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-732" Mar 22 00:35:42.715: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-732 to register on node latest-worker STEP: Creating pod Mar 22 00:35:52.549: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:35:52.565: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-rstqr] to have phase Bound Mar 22 00:35:52.615: INFO: PersistentVolumeClaim pvc-rstqr found but phase is Pending instead of Bound. 
Mar 22 00:35:54.621: INFO: PersistentVolumeClaim pvc-rstqr found and phase=Bound (2.055958179s) STEP: Deleting the previously created pod Mar 22 00:36:06.707: INFO: Deleting pod "pvc-volume-tester-m5smg" in namespace "csi-mock-volumes-732" Mar 22 00:36:06.714: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m5smg" to be fully deleted STEP: Checking CSI driver logs Mar 22 00:36:56.793: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2088352b-d584-4633-b14f-20bdb6f066b0/volumes/kubernetes.io~csi/pvc-cf6b2862-e2d8-483d-b6dd-6f472b3ee03f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-m5smg Mar 22 00:36:56.793: INFO: Deleting pod "pvc-volume-tester-m5smg" in namespace "csi-mock-volumes-732" STEP: Deleting claim pvc-rstqr Mar 22 00:36:56.823: INFO: Waiting up to 2m0s for PersistentVolume pvc-cf6b2862-e2d8-483d-b6dd-6f472b3ee03f to get deleted Mar 22 00:36:56.834: INFO: PersistentVolume pvc-cf6b2862-e2d8-483d-b6dd-6f472b3ee03f found and phase=Bound (10.944354ms) Mar 22 00:36:58.837: INFO: PersistentVolume pvc-cf6b2862-e2d8-483d-b6dd-6f472b3ee03f was removed STEP: Deleting storageclass csi-mock-volumes-732-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-732 STEP: Waiting for namespaces [csi-mock-volumes-732] to vanish STEP: uninstalling csi mock driver Mar 22 00:37:04.852: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-attacher Mar 22 00:37:04.876: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-732 Mar 22 00:37:04.894: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-732 Mar 22 00:37:04.915: INFO: deleting *v1.Role: csi-mock-volumes-732-3320/external-attacher-cfg-csi-mock-volumes-732 Mar 22 00:37:04.933: INFO: deleting *v1.RoleBinding: csi-mock-volumes-732-3320/csi-attacher-role-cfg Mar 22 00:37:04.938: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-provisioner Mar 22 00:37:04.944: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-732 Mar 22 00:37:04.954: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-732 Mar 22 00:37:04.962: INFO: deleting *v1.Role: csi-mock-volumes-732-3320/external-provisioner-cfg-csi-mock-volumes-732 Mar 22 00:37:04.968: INFO: deleting *v1.RoleBinding: csi-mock-volumes-732-3320/csi-provisioner-role-cfg Mar 22 00:37:04.974: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-resizer Mar 22 00:37:04.997: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-732 Mar 22 00:37:05.017: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-732 Mar 22 00:37:05.082: INFO: deleting *v1.Role: csi-mock-volumes-732-3320/external-resizer-cfg-csi-mock-volumes-732 Mar 22 00:37:05.088: INFO: deleting *v1.RoleBinding: csi-mock-volumes-732-3320/csi-resizer-role-cfg Mar 22 00:37:05.094: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-snapshotter Mar 22 00:37:05.117: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-732 Mar 22 00:37:05.147: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-732 Mar 22 00:37:05.179: INFO: deleting *v1.Role: csi-mock-volumes-732-3320/external-snapshotter-leaderelection-csi-mock-volumes-732 Mar 22 00:37:05.271: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-732-3320/external-snapshotter-leaderelection Mar 22 00:37:05.281: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-732-3320/csi-mock Mar 22 00:37:05.305: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-732 Mar 22 00:37:05.363: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-732 Mar 22 00:37:05.402: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-732 Mar 22 00:37:05.407: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-732 Mar 22 00:37:05.418: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-732 Mar 22 00:37:05.437: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-732 Mar 22 00:37:05.447: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-732 Mar 22 00:37:05.475: INFO: deleting *v1.StatefulSet: csi-mock-volumes-732-3320/csi-mockplugin Mar 22 00:37:05.502: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-732 Mar 22 00:37:05.548: INFO: deleting *v1.StatefulSet: csi-mock-volumes-732-3320/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-732-3320 STEP: Waiting for namespaces [csi-mock-volumes-732-3320] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:37:57.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:135.806 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":133,"completed":58,"skipped":3649,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, 
insufficient capacity"]} SSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:37:57.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 22 00:37:57.649: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 22 00:37:57.662: INFO: Default storage class: "standard" Mar 22 00:37:57.662: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Mar 22 00:38:07.711: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectiongbtk5] to have phase Bound Mar 22 00:38:07.714: INFO: PersistentVolumeClaim pvc-protectiongbtk5 found and phase=Bound (2.337971ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Creating second Pod whose scheduling fails because it uses a PVC that is being deleted Mar 22 00:38:07.810: INFO: Waiting up to 5m0s for pod "pvc-tester-bns64" in namespace "pvc-protection-1518" to be "Unschedulable" Mar 22 00:38:07.819: INFO: Pod "pvc-tester-bns64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819352ms Mar 22 00:38:09.823: INFO: Pod "pvc-tester-bns64": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013204682s Mar 22 00:38:09.823: INFO: Pod "pvc-tester-bns64" satisfied condition "Unschedulable" STEP: Deleting the second pod that uses the PVC that is being deleted Mar 22 00:38:09.827: INFO: Deleting pod "pvc-tester-bns64" in namespace "pvc-protection-1518" Mar 22 00:38:09.892: INFO: Wait up to 5m0s for pod "pvc-tester-bns64" to be fully deleted STEP: Checking again that the PVC status is Terminating STEP: Deleting the first pod that uses the PVC Mar 22 00:38:09.899: INFO: Deleting pod "pvc-tester-6kn4p" in namespace "pvc-protection-1518" Mar 22 00:38:09.903: INFO: Wait up to 5m0s for pod "pvc-tester-6kn4p" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Mar 22 00:38:55.945: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectiongbtk5 to be removed Mar 22 00:38:55.948: INFO: Claim "pvc-protectiongbtk5" in namespace "pvc-protection-1518" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:38:55.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-1518" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:58.374 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":133,"completed":59,"skipped":3657,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS 
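An aside on the PASSED PVC Protection spec above: it hinges on the kubernetes.io/pvc-protection finalizer, which lets the delete request be accepted while the claim object lingers in Terminating until no pod uses it. A minimal sketch of observing the same behavior outside the suite, assuming a claim named demo-pvc in namespace demo that some running pod mounts (both names are placeholders, not taken from this log):

# Finalizer is present while the claim is in active use by a pod
kubectl -n demo get pvc demo-pvc -o jsonpath='{.metadata.finalizers}'
# expected: ["kubernetes.io/pvc-protection"]

# Deletion is accepted immediately, but the object is only marked
kubectl -n demo delete pvc demo-pvc --wait=false
kubectl -n demo get pvc demo-pvc     # STATUS shows Terminating

# Once the last consuming pod exits, the finalizer is stripped and the
# claim disappears, which is what the spec's 3m0s wait above confirms.

A second pod created against the Terminating claim stays Unschedulable, exactly as pod pvc-tester-bns64 does in the log above.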
------------------------------ [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:38:55.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 Mar 22 00:38:56.051: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:38:56.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-6375" for this suite. S [SKIPPING] [0.121 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:827 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:155 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:38:56.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 22 00:38:56.134: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:38:56.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4139" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 22 00:38:56.143: INFO: AfterEach: Cleaning up test resources Mar 22 00:38:56.143: INFO: pvc is nil Mar 22 00:38:56.143: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.060 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:155 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:38:56.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56" Mar 22 00:39:00.548: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56" "/tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56"] Namespace:persistent-local-volumes-test-9431 PodName:hostexec-latest-worker2-ztwkb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:39:00.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:39:00.854: INFO: Creating a PV followed by a PVC Mar 22 00:39:00.885: INFO: Waiting for PV local-pvqpsqm to bind to PVC pvc-6nq29 Mar 22 00:39:00.885: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-6nq29] to have phase Bound Mar 22 00:39:00.910: INFO: PersistentVolumeClaim pvc-6nq29 found but phase is Pending instead of Bound. Mar 22 00:39:02.913: INFO: PersistentVolumeClaim pvc-6nq29 found but phase is Pending instead of Bound. Mar 22 00:39:04.918: INFO: PersistentVolumeClaim pvc-6nq29 found but phase is Pending instead of Bound. Mar 22 00:39:06.922: INFO: PersistentVolumeClaim pvc-6nq29 found but phase is Pending instead of Bound. 
Mar 22 00:39:08.925: INFO: PersistentVolumeClaim pvc-6nq29 found but phase is Pending instead of Bound. Mar 22 00:39:10.928: INFO: PersistentVolumeClaim pvc-6nq29 found but phase is Pending instead of Bound. Mar 22 00:39:12.962: INFO: PersistentVolumeClaim pvc-6nq29 found and phase=Bound (12.076421303s) Mar 22 00:39:12.962: INFO: Waiting up to 3m0s for PersistentVolume local-pvqpsqm to have phase Bound Mar 22 00:39:12.966: INFO: PersistentVolume local-pvqpsqm found and phase=Bound (4.662618ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:39:16.995: INFO: pod "pod-8065663a-f240-4681-bb42-2b791a986b39" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:39:16.995: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9431 PodName:pod-8065663a-f240-4681-bb42-2b791a986b39 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:39:16.995: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:39:17.104: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 22 00:39:17.104: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9431 PodName:pod-8065663a-f240-4681-bb42-2b791a986b39 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:39:17.104: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:39:17.192: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-8065663a-f240-4681-bb42-2b791a986b39 in namespace persistent-local-volumes-test-9431 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:39:17.197: INFO: Deleting PersistentVolumeClaim "pvc-6nq29" Mar 22 00:39:17.203: INFO: Deleting PersistentVolume "local-pvqpsqm" STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56" Mar 22 00:39:17.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56"] Namespace:persistent-local-volumes-test-9431 PodName:hostexec-latest-worker2-ztwkb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:39:17.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 22 00:39:17.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-13da734a-8d60-439d-872f-34b8ffc4ac56] Namespace:persistent-local-volumes-test-9431 PodName:hostexec-latest-worker2-ztwkb 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:39:17.456: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:39:17.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9431" for this suite. • [SLOW TEST:21.600 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":60,"skipped":3872,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSS ------------------------------ [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:39:17.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, 
basename csi-mock-volumes-8717 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:39:17.984: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-attacher Mar 22 00:39:17.987: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8717 Mar 22 00:39:17.987: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8717 Mar 22 00:39:18.045: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8717 Mar 22 00:39:18.055: INFO: creating *v1.Role: csi-mock-volumes-8717-4183/external-attacher-cfg-csi-mock-volumes-8717 Mar 22 00:39:18.076: INFO: creating *v1.RoleBinding: csi-mock-volumes-8717-4183/csi-attacher-role-cfg Mar 22 00:39:18.090: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-provisioner Mar 22 00:39:18.096: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8717 Mar 22 00:39:18.096: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8717 Mar 22 00:39:18.102: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8717 Mar 22 00:39:18.108: INFO: creating *v1.Role: csi-mock-volumes-8717-4183/external-provisioner-cfg-csi-mock-volumes-8717 Mar 22 00:39:18.195: INFO: creating *v1.RoleBinding: csi-mock-volumes-8717-4183/csi-provisioner-role-cfg Mar 22 00:39:18.200: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-resizer Mar 22 00:39:18.210: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8717 Mar 22 00:39:18.210: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8717 Mar 22 00:39:18.240: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8717 Mar 22 00:39:18.246: INFO: creating *v1.Role: csi-mock-volumes-8717-4183/external-resizer-cfg-csi-mock-volumes-8717 Mar 22 00:39:18.252: INFO: creating *v1.RoleBinding: csi-mock-volumes-8717-4183/csi-resizer-role-cfg Mar 22 00:39:18.274: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-snapshotter Mar 22 00:39:18.288: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8717 Mar 22 00:39:18.288: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8717 Mar 22 00:39:18.326: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8717 Mar 22 00:39:18.348: INFO: creating *v1.Role: csi-mock-volumes-8717-4183/external-snapshotter-leaderelection-csi-mock-volumes-8717 Mar 22 00:39:18.407: INFO: creating *v1.RoleBinding: csi-mock-volumes-8717-4183/external-snapshotter-leaderelection Mar 22 00:39:18.519: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-mock Mar 22 00:39:18.524: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8717 Mar 22 00:39:18.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8717 Mar 22 00:39:18.601: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8717 Mar 22 00:39:18.662: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8717 Mar 22 00:39:18.712: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8717 Mar 22 00:39:18.721: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8717 Mar 22 00:39:18.725: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8717 Mar 22 00:39:18.800: INFO: creating 
*v1.StatefulSet: csi-mock-volumes-8717-4183/csi-mockplugin Mar 22 00:39:18.807: INFO: creating *v1.StatefulSet: csi-mock-volumes-8717-4183/csi-mockplugin-attacher Mar 22 00:39:18.832: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8717 to register on node latest-worker STEP: Creating pod Mar 22 00:39:28.601: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:39:28.783: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-4qbts] to have phase Bound Mar 22 00:39:29.189: INFO: PersistentVolumeClaim pvc-4qbts found but phase is Pending instead of Bound. Mar 22 00:39:31.193: INFO: PersistentVolumeClaim pvc-4qbts found and phase=Bound (2.410363243s) STEP: Deleting the previously created pod Mar 22 00:39:51.240: INFO: Deleting pod "pvc-volume-tester-4n9pz" in namespace "csi-mock-volumes-8717" Mar 22 00:39:51.292: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4n9pz" to be fully deleted STEP: Checking CSI driver logs Mar 22 00:40:05.965: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2a9e664d-bf9c-4c43-82d0-3d5ea52fee61/volumes/kubernetes.io~csi/pvc-c24d4856-ea9c-4d10-b7f8-d40ad30110e4/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-4n9pz Mar 22 00:40:05.965: INFO: Deleting pod "pvc-volume-tester-4n9pz" in namespace "csi-mock-volumes-8717" STEP: Deleting claim pvc-4qbts Mar 22 00:40:05.976: INFO: Waiting up to 2m0s for PersistentVolume pvc-c24d4856-ea9c-4d10-b7f8-d40ad30110e4 to get deleted Mar 22 00:40:05.983: INFO: PersistentVolume pvc-c24d4856-ea9c-4d10-b7f8-d40ad30110e4 found and phase=Bound (6.631536ms) Mar 22 00:40:07.986: INFO: PersistentVolume pvc-c24d4856-ea9c-4d10-b7f8-d40ad30110e4 was removed STEP: Deleting storageclass csi-mock-volumes-8717-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8717 STEP: Waiting for namespaces [csi-mock-volumes-8717] to vanish STEP: uninstalling csi mock driver Mar 22 00:40:14.009: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-attacher Mar 22 00:40:14.016: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8717 Mar 22 00:40:14.033: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8717 Mar 22 00:40:14.041: INFO: deleting *v1.Role: csi-mock-volumes-8717-4183/external-attacher-cfg-csi-mock-volumes-8717 Mar 22 00:40:14.070: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8717-4183/csi-attacher-role-cfg Mar 22 00:40:14.076: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-provisioner Mar 22 00:40:14.137: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8717 Mar 22 00:40:14.168: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8717 Mar 22 00:40:14.173: INFO: deleting *v1.Role: csi-mock-volumes-8717-4183/external-provisioner-cfg-csi-mock-volumes-8717 Mar 22 00:40:14.178: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8717-4183/csi-provisioner-role-cfg Mar 22 00:40:14.184: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-resizer Mar 22 00:40:14.190: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8717 Mar 22 00:40:14.200: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8717 Mar 22 00:40:14.208: INFO: deleting *v1.Role: 
csi-mock-volumes-8717-4183/external-resizer-cfg-csi-mock-volumes-8717 Mar 22 00:40:14.214: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8717-4183/csi-resizer-role-cfg Mar 22 00:40:14.252: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-snapshotter Mar 22 00:40:14.281: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8717 Mar 22 00:40:14.314: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8717 Mar 22 00:40:14.326: INFO: deleting *v1.Role: csi-mock-volumes-8717-4183/external-snapshotter-leaderelection-csi-mock-volumes-8717 Mar 22 00:40:14.333: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8717-4183/external-snapshotter-leaderelection Mar 22 00:40:14.339: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8717-4183/csi-mock Mar 22 00:40:14.345: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8717 Mar 22 00:40:14.386: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8717 Mar 22 00:40:14.394: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8717 Mar 22 00:40:14.418: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8717 Mar 22 00:40:14.437: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8717 Mar 22 00:40:14.443: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8717 Mar 22 00:40:14.473: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8717 Mar 22 00:40:14.516: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8717-4183/csi-mockplugin Mar 22 00:40:14.528: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8717-4183/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8717-4183 STEP: Waiting for namespaces [csi-mock-volumes-8717-4183] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:40:58.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:100.805 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":133,"completed":61,"skipped":3875,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] 
PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:40:58.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 22 00:41:28.690: INFO: Deleting pod "pv-3449"/"pod-ephm-test-projected-6rgz" Mar 22 00:41:28.690: INFO: Deleting pod "pod-ephm-test-projected-6rgz" in namespace "pv-3449" Mar 22 00:41:28.698: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-6rgz" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:41:36.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3449" for this suite. 
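For reference, the situation this Ephemeralstorage spec constructs can be sketched as follows: the pod references a projected volume whose ConfigMap source was never created, so kubelet cannot set the volume up and the pod never starts, yet deletion must still complete within the 5m wait. All names and the image below are placeholders, not taken from the suite:

kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-ephm-demo
spec:
  containers:
  - name: test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    projected:
      sources:
      - configMap:
          name: no-such-config   # intentionally absent
EOF

# The pod sits in ContainerCreating with FailedMount events; the point of
# the test is that this delete nevertheless finishes:
kubectl -n demo delete pod pod-ephm-demo --timeout=5m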
• [SLOW TEST:38.170 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":133,"completed":62,"skipped":3884,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:41:36.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-32e28030-2536-4204-9688-cc9e81c7d925 STEP: Creating a pod to test consume configMaps Mar 22 00:41:36.913: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963" in namespace "projected-2838" to be "Succeeded or Failed" Mar 22 00:41:36.958: INFO: Pod "pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.490592ms Mar 22 00:41:38.963: INFO: Pod "pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050144207s Mar 22 00:41:40.967: INFO: Pod "pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054351266s STEP: Saw pod success Mar 22 00:41:40.967: INFO: Pod "pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963" satisfied condition "Succeeded or Failed" Mar 22 00:41:40.970: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963 container agnhost-container: STEP: delete the pod Mar 22 00:41:41.147: INFO: Waiting for pod pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963 to disappear Mar 22 00:41:41.180: INFO: Pod pod-projected-configmaps-5ffa2757-0c2a-43a7-a183-753902824963 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:41:41.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2838" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":63,"skipped":3913,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:41:41.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod 
with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 22 00:42:11.331: INFO: Deleting pod "pv-6153"/"pod-ephm-test-projected-g6jj" Mar 22 00:42:11.331: INFO: Deleting pod "pod-ephm-test-projected-g6jj" in namespace "pv-6153" Mar 22 00:42:11.337: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-g6jj" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:42:15.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6153" for this suite. • [SLOW TEST:34.211 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":133,"completed":64,"skipped":3918,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:42:15.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6" Mar 22 00:42:19.567: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6 && dd if=/dev/zero of=/tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6/file] Namespace:persistent-local-volumes-test-6884 PodName:hostexec-latest-worker-rtkzw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:19.568: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:19.776: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6884 PodName:hostexec-latest-worker-rtkzw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:19.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:42:19.890: INFO: Creating a PV followed by a PVC Mar 22 00:42:19.910: INFO: Waiting for PV local-pvd24jh to bind to PVC pvc-fvnhf Mar 22 00:42:19.911: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-fvnhf] to have phase Bound Mar 22 00:42:19.958: INFO: PersistentVolumeClaim pvc-fvnhf found but phase is Pending instead of Bound. 
Mar 22 00:42:21.963: INFO: PersistentVolumeClaim pvc-fvnhf found and phase=Bound (2.052323449s) Mar 22 00:42:21.963: INFO: Waiting up to 3m0s for PersistentVolume local-pvd24jh to have phase Bound Mar 22 00:42:21.966: INFO: PersistentVolume local-pvd24jh found and phase=Bound (3.127608ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:42:26.002: INFO: pod "pod-050d1bd8-e92b-45f8-85c3-23fd6ec5e478" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:42:26.002: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6884 PodName:pod-050d1bd8-e92b-45f8-85c3-23fd6ec5e478 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:26.002: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:26.105: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000075 seconds, 234.4KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 22 00:42:26.105: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-6884 PodName:pod-050d1bd8-e92b-45f8-85c3-23fd6ec5e478 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:26.105: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:26.197: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-050d1bd8-e92b-45f8-85c3-23fd6ec5e478 in namespace persistent-local-volumes-test-6884 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:42:26.203: INFO: Deleting PersistentVolumeClaim "pvc-fvnhf" Mar 22 00:42:26.231: INFO: Deleting PersistentVolume "local-pvd24jh" Mar 22 00:42:26.267: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6884 PodName:hostexec-latest-worker-rtkzw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:26.267: INFO: >>> kubeConfig: /root/.kube/config 
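(Note: because local-pvd24jh is a raw block PV, the pod cannot write an ordinary file into it; the write/read pair above stages the content through a scratch file and drives the device node directly with dd and hexdump. Condensed from the exec commands logged in this test, as run inside the write-pod container:

  echo test-file-content > /tmp/test-file
  dd if=/tmp/test-file of=/mnt/volume1 bs=512 count=100    # copy the bytes onto the raw device node
  hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1     # dump the first 100 bytes back as printable chars

The "test-file-content..." output above is exactly this hexdump rendering: the 18 written bytes followed by the zeroed remainder of the 100-byte window, shown as dots.)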
STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6/file Mar 22 00:42:26.385: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6884 PodName:hostexec-latest-worker-rtkzw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:26.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6 Mar 22 00:42:26.486: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ab9cb870-1e8b-40c5-9c66-5e1b9996e9f6] Namespace:persistent-local-volumes-test-6884 PodName:hostexec-latest-worker-rtkzw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:26.486: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:42:26.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6884" for this suite. • [SLOW TEST:11.200 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":65,"skipped":3990,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} 
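(Note: the [Volume type: block] fixture in the test above is provisioned with nothing more than a zero-filled backing file and a loop device. A condensed sketch of the node-side commands from this log — the suite runs them through nsenter into the node's mount namespace; the temp path is generated per test and the /dev/loopN assignment varies:

  mkdir -p /tmp/local-volume-test-<uuid>
  dd if=/dev/zero of=/tmp/local-volume-test-<uuid>/file bs=4096 count=5120   # 20 MiB backing file
  losetup -f /tmp/local-volume-test-<uuid>/file                              # attach first free loop device
  losetup | grep /tmp/local-volume-test-<uuid>/file | awk '{ print $1 }'     # recover which /dev/loopN was used
  # ... PV/PVC created against the loop device, test runs, then teardown:
  losetup -d /dev/loopN
  rm -r /tmp/local-volume-test-<uuid>

The framework's 3m0s bind wait seen above is, in effect, a poll; a hand-rolled equivalent — an approximation, not what the Go framework literally runs — would be:

  until [ "$(kubectl -n persistent-local-volumes-test-6884 get pvc pvc-fvnhf -o jsonpath='{.status.phase}')" = Bound ]; do sleep 2; done
)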
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:42:26.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 22 00:42:26.731: INFO: Waiting up to 5m0s for pod "pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114" in namespace "emptydir-5963" to be "Succeeded or Failed" Mar 22 00:42:26.763: INFO: Pod "pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114": Phase="Pending", Reason="", readiness=false. Elapsed: 31.416704ms Mar 22 00:42:29.000: INFO: Pod "pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268905987s Mar 22 00:42:31.005: INFO: Pod "pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.273784651s STEP: Saw pod success Mar 22 00:42:31.005: INFO: Pod "pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114" satisfied condition "Succeeded or Failed" Mar 22 00:42:31.008: INFO: Trying to get logs from node latest-worker pod pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114 container test-container: STEP: delete the pod Mar 22 00:42:31.249: INFO: Waiting for pod pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114 to disappear Mar 22 00:42:31.342: INFO: Pod pod-a39e4ac0-7d1a-47ee-b4cb-dc3e5851a114 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:42:31.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5963" for this suite. 
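(Note: the tmpfs FSGroup case above reduces to a pod whose memory-backed emptyDir is mounted under a pod-level fsGroup, after which the test checks the volume's mode and ownership. A hand-written approximation of such a pod — the name, image, fsGroup value, and probe command are illustrative, not the suite's exact spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    fsGroup: 123                       # applied to the emptyDir mount
  containers:
  - name: test-container
    image: busybox                     # stand-in for the suite's agnhost image
    command: ["sh", "-c", "ls -ld /test-volume && id"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory                   # tmpfs-backed, as in the test
EOF
)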
•{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":133,"completed":66,"skipped":4112,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes NFSv3 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:42:31.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 22 00:42:31.513: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:42:31.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-6306" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.161 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:42:31.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:42:35.713: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf-backend && ln -s /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf-backend /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf] Namespace:persistent-local-volumes-test-8818 PodName:hostexec-latest-worker-cx2kk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:35.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:42:35.824: INFO: Creating a PV followed by a PVC Mar 22 00:42:35.836: INFO: Waiting for PV local-pv82849 to bind to PVC pvc-pql7s Mar 22 00:42:35.836: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-pql7s] to have phase Bound Mar 22 00:42:35.862: INFO: PersistentVolumeClaim pvc-pql7s found but phase is Pending instead of Bound. Mar 22 00:42:37.866: INFO: PersistentVolumeClaim pvc-pql7s found but phase is Pending instead of Bound. Mar 22 00:42:39.869: INFO: PersistentVolumeClaim pvc-pql7s found but phase is Pending instead of Bound. 
Mar 22 00:42:41.874: INFO: PersistentVolumeClaim pvc-pql7s found and phase=Bound (6.037473561s) Mar 22 00:42:41.874: INFO: Waiting up to 3m0s for PersistentVolume local-pv82849 to have phase Bound Mar 22 00:42:41.877: INFO: PersistentVolume local-pv82849 found and phase=Bound (2.992442ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:42:45.933: INFO: pod "pod-fc65f40d-a538-48b3-9ca8-a9d2ca993bae" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:42:45.933: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8818 PodName:pod-fc65f40d-a538-48b3-9ca8-a9d2ca993bae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:45.933: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:46.023: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:42:46.023: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8818 PodName:pod-fc65f40d-a538-48b3-9ca8-a9d2ca993bae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:46.023: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:46.112: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:42:50.326: INFO: pod "pod-7a964e72-2ab2-4853-80a1-2e8390ae3ed9" created on Node "latest-worker" Mar 22 00:42:50.326: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8818 PodName:pod-7a964e72-2ab2-4853-80a1-2e8390ae3ed9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:50.326: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:50.428: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 22 00:42:50.428: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8818 PodName:pod-7a964e72-2ab2-4853-80a1-2e8390ae3ed9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:50.428: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:50.527: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 22 00:42:50.527: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8818 PodName:pod-fc65f40d-a538-48b3-9ca8-a9d2ca993bae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:42:50.527: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:50.609: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-fc65f40d-a538-48b3-9ca8-a9d2ca993bae in namespace persistent-local-volumes-test-8818 STEP: Deleting pod2 STEP: Deleting pod pod-7a964e72-2ab2-4853-80a1-2e8390ae3ed9 in namespace persistent-local-volumes-test-8818 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:42:50.671: INFO: Deleting PersistentVolumeClaim "pvc-pql7s" Mar 22 00:42:51.091: INFO: Deleting PersistentVolume "local-pv82849" STEP: Removing the test directory Mar 22 00:42:51.127: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf && rm -r /tmp/local-volume-test-a81dc481-9053-423f-8949-38855c16a5cf-backend] Namespace:persistent-local-volumes-test-8818 PodName:hostexec-latest-worker-cx2kk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:51.127: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:42:51.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8818" for this suite. • [SLOW TEST:19.763 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":67,"skipped":4385,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume 
CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:42:51.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e" Mar 22 00:42:56.481: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e && dd if=/dev/zero of=/tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e/file] Namespace:persistent-local-volumes-test-5689 PodName:hostexec-latest-worker-fwwcz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:56.481: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:42:56.619: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5689 PodName:hostexec-latest-worker-fwwcz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:42:56.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:42:56.724: INFO: Creating a PV followed by a PVC Mar 22 00:42:56.760: INFO: Waiting for PV local-pvc7mxc to bind to PVC pvc-rw52d Mar 22 00:42:56.760: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-rw52d] to have phase Bound Mar 22 00:42:56.832: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:42:58.837: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:43:00.841: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:43:02.845: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:43:04.848: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:43:06.853: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:43:08.856: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. 
Mar 22 00:43:10.860: INFO: PersistentVolumeClaim pvc-rw52d found but phase is Pending instead of Bound. Mar 22 00:43:12.864: INFO: PersistentVolumeClaim pvc-rw52d found and phase=Bound (16.104761809s) Mar 22 00:43:12.864: INFO: Waiting up to 3m0s for PersistentVolume local-pvc7mxc to have phase Bound Mar 22 00:43:12.867: INFO: PersistentVolume local-pvc7mxc found and phase=Bound (2.406856ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:43:18.984: INFO: pod "pod-1f15f5b5-1f89-49b3-a83c-5084aea1dc03" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:43:18.984: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5689 PodName:pod-1f15f5b5-1f89-49b3-a83c-5084aea1dc03 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:43:18.984: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:43:19.099: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 22 00:43:19.100: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5689 PodName:pod-1f15f5b5-1f89-49b3-a83c-5084aea1dc03 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:43:19.100: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:43:19.197: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 22 00:43:19.197: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5689 PodName:pod-1f15f5b5-1f89-49b3-a83c-5084aea1dc03 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:43:19.197: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:43:19.309: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-1f15f5b5-1f89-49b3-a83c-5084aea1dc03 in namespace persistent-local-volumes-test-5689 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:43:19.317: INFO: Deleting PersistentVolumeClaim "pvc-rw52d" Mar 22 00:43:19.354: INFO: Deleting PersistentVolume "local-pvc7mxc" Mar 22 00:43:19.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5689 PodName:hostexec-latest-worker-fwwcz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 22 00:43:19.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e/file Mar 22 00:43:19.485: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5689 PodName:hostexec-latest-worker-fwwcz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:43:19.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e Mar 22 00:43:19.586: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a6ace511-cbbb-459f-a008-d4338a11b66e] Namespace:persistent-local-volumes-test-5689 PodName:hostexec-latest-worker-fwwcz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:43:19.586: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:43:19.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5689" for this suite. • [SLOW TEST:28.450 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":68,"skipped":4423,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have 
capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSS ------------------------------ [sig-storage] Pod Disks should be able to delete a non-existent PD without error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:43:19.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] should be able to delete a non-existent PD without error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Mar 22 00:43:19.865: INFO: Only supported for providers [gce] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:43:19.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1071" for this suite. S [SKIPPING] [0.122 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Only supported for providers [gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:450 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:43:19.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-1906 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:43:20.426: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-attacher Mar 22 00:43:20.429: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1906 Mar 22 00:43:20.429: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1906 Mar 22 00:43:20.464: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1906 Mar 22 00:43:20.506: INFO: creating *v1.Role: csi-mock-volumes-1906-6149/external-attacher-cfg-csi-mock-volumes-1906 Mar 22 00:43:20.552: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-1906-6149/csi-attacher-role-cfg Mar 22 00:43:20.559: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-provisioner Mar 22 00:43:20.577: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1906 Mar 22 00:43:20.577: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1906 Mar 22 00:43:20.602: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1906 Mar 22 00:43:20.647: INFO: creating *v1.Role: csi-mock-volumes-1906-6149/external-provisioner-cfg-csi-mock-volumes-1906 Mar 22 00:43:20.870: INFO: creating *v1.RoleBinding: csi-mock-volumes-1906-6149/csi-provisioner-role-cfg Mar 22 00:43:20.921: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-resizer Mar 22 00:43:21.049: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1906 Mar 22 00:43:21.049: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1906 Mar 22 00:43:21.056: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1906 Mar 22 00:43:21.093: INFO: creating *v1.Role: csi-mock-volumes-1906-6149/external-resizer-cfg-csi-mock-volumes-1906 Mar 22 00:43:21.098: INFO: creating *v1.RoleBinding: csi-mock-volumes-1906-6149/csi-resizer-role-cfg Mar 22 00:43:21.121: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-snapshotter Mar 22 00:43:21.128: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1906 Mar 22 00:43:21.128: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1906 Mar 22 00:43:21.204: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1906 Mar 22 00:43:21.208: INFO: creating *v1.Role: csi-mock-volumes-1906-6149/external-snapshotter-leaderelection-csi-mock-volumes-1906 Mar 22 00:43:21.215: INFO: creating *v1.RoleBinding: csi-mock-volumes-1906-6149/external-snapshotter-leaderelection Mar 22 00:43:21.240: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-mock Mar 22 00:43:21.287: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1906 Mar 22 00:43:21.293: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1906 Mar 22 00:43:21.330: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1906 Mar 22 00:43:21.334: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1906 Mar 22 00:43:21.341: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1906 Mar 22 00:43:21.361: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1906 Mar 22 00:43:21.376: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1906 Mar 22 00:43:21.382: INFO: creating *v1.StatefulSet: csi-mock-volumes-1906-6149/csi-mockplugin Mar 22 00:43:21.389: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1906 Mar 22 00:43:21.409: INFO: creating *v1.StatefulSet: csi-mock-volumes-1906-6149/csi-mockplugin-resizer Mar 22 00:43:21.470: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1906" Mar 22 00:43:21.473: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1906 to register on node latest-worker STEP: Creating pod Mar 22 00:43:31.239: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:43:31.267: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-kw59q] to have phase Bound Mar 22 00:43:31.281: INFO: 
PersistentVolumeClaim pvc-kw59q found but phase is Pending instead of Bound. Mar 22 00:43:33.284: INFO: PersistentVolumeClaim pvc-kw59q found and phase=Bound (2.01711757s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-hw4xd Mar 22 00:43:39.333: INFO: Deleting pod "pvc-volume-tester-hw4xd" in namespace "csi-mock-volumes-1906" Mar 22 00:43:39.338: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hw4xd" to be fully deleted STEP: Deleting claim pvc-kw59q Mar 22 00:43:57.358: INFO: Waiting up to 2m0s for PersistentVolume pvc-15d80300-60fa-401c-9567-4f2f051d2b71 to get deleted Mar 22 00:43:57.397: INFO: PersistentVolume pvc-15d80300-60fa-401c-9567-4f2f051d2b71 found and phase=Bound (38.937235ms) Mar 22 00:43:59.401: INFO: PersistentVolume pvc-15d80300-60fa-401c-9567-4f2f051d2b71 was removed STEP: Deleting storageclass csi-mock-volumes-1906-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1906 STEP: Waiting for namespaces [csi-mock-volumes-1906] to vanish STEP: uninstalling csi mock driver Mar 22 00:44:05.453: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-attacher Mar 22 00:44:05.728: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1906 Mar 22 00:44:05.930: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1906 Mar 22 00:44:06.021: INFO: deleting *v1.Role: csi-mock-volumes-1906-6149/external-attacher-cfg-csi-mock-volumes-1906 Mar 22 00:44:06.035: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1906-6149/csi-attacher-role-cfg Mar 22 00:44:06.041: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-provisioner Mar 22 00:44:06.061: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1906 Mar 22 00:44:06.328: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1906 Mar 22 00:44:06.341: INFO: deleting *v1.Role: csi-mock-volumes-1906-6149/external-provisioner-cfg-csi-mock-volumes-1906 Mar 22 00:44:06.359: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1906-6149/csi-provisioner-role-cfg Mar 22 00:44:06.378: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-resizer Mar 22 00:44:06.501: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1906 Mar 22 00:44:06.528: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1906 Mar 22 00:44:06.587: INFO: deleting *v1.Role: csi-mock-volumes-1906-6149/external-resizer-cfg-csi-mock-volumes-1906 Mar 22 00:44:06.645: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1906-6149/csi-resizer-role-cfg Mar 22 00:44:06.652: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-snapshotter Mar 22 00:44:06.658: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1906 Mar 22 00:44:06.683: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1906 Mar 22 00:44:06.718: INFO: deleting *v1.Role: csi-mock-volumes-1906-6149/external-snapshotter-leaderelection-csi-mock-volumes-1906 Mar 22 00:44:06.799: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1906-6149/external-snapshotter-leaderelection Mar 22 00:44:06.842: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1906-6149/csi-mock Mar 22 00:44:06.869: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1906 Mar 22 00:44:06.914: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-1906 Mar 22 00:44:06.927: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1906 Mar 22 00:44:06.934: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1906 Mar 22 00:44:06.940: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1906 Mar 22 00:44:06.945: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1906 Mar 22 00:44:06.951: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1906 Mar 22 00:44:06.976: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1906-6149/csi-mockplugin Mar 22 00:44:06.993: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1906 Mar 22 00:44:07.006: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1906-6149/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-1906-6149 STEP: Waiting for namespaces [csi-mock-volumes-1906-6149] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:44:59.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:99.168 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":133,"completed":69,"skipped":4455,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:44:59.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:45:03.217: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bfff7424-0965-46e5-a33e-c14ce7531908-backend && ln -s /tmp/local-volume-test-bfff7424-0965-46e5-a33e-c14ce7531908-backend /tmp/local-volume-test-bfff7424-0965-46e5-a33e-c14ce7531908] Namespace:persistent-local-volumes-test-6842 PodName:hostexec-latest-worker-dbh8j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:45:03.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:45:03.324: INFO: Creating a PV followed by a PVC Mar 22 00:45:03.335: INFO: Waiting for PV local-pv855s8 to bind to PVC pvc-qhgns Mar 22 00:45:03.335: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qhgns] to have phase Bound Mar 22 00:45:03.341: INFO: PersistentVolumeClaim pvc-qhgns found but phase is Pending instead of Bound. Mar 22 00:45:05.345: INFO: PersistentVolumeClaim pvc-qhgns found but phase is Pending instead of Bound. Mar 22 00:45:07.349: INFO: PersistentVolumeClaim pvc-qhgns found but phase is Pending instead of Bound. Mar 22 00:45:09.353: INFO: PersistentVolumeClaim pvc-qhgns found but phase is Pending instead of Bound. Mar 22 00:45:11.357: INFO: PersistentVolumeClaim pvc-qhgns found but phase is Pending instead of Bound. 
Mar 22 00:45:13.361: INFO: PersistentVolumeClaim pvc-qhgns found and phase=Bound (10.025888406s) Mar 22 00:45:13.361: INFO: Waiting up to 3m0s for PersistentVolume local-pv855s8 to have phase Bound Mar 22 00:45:13.363: INFO: PersistentVolume local-pv855s8 found and phase=Bound (2.528996ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:45:17.391: INFO: pod "pod-3ebbaf01-ccc4-4d3f-91b9-eb02416f3c9e" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:45:17.391: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6842 PodName:pod-3ebbaf01-ccc4-4d3f-91b9-eb02416f3c9e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:45:17.391: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:45:17.516: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 22 00:45:17.516: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6842 PodName:pod-3ebbaf01-ccc4-4d3f-91b9-eb02416f3c9e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:45:17.516: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:45:17.618: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3ebbaf01-ccc4-4d3f-91b9-eb02416f3c9e in namespace persistent-local-volumes-test-6842 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:45:17.624: INFO: Deleting PersistentVolumeClaim "pvc-qhgns" Mar 22 00:45:17.650: INFO: Deleting PersistentVolume "local-pv855s8" STEP: Removing the test directory Mar 22 00:45:17.665: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bfff7424-0965-46e5-a33e-c14ce7531908 && rm -r /tmp/local-volume-test-bfff7424-0965-46e5-a33e-c14ce7531908-backend] Namespace:persistent-local-volumes-test-6842 PodName:hostexec-latest-worker-dbh8j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:45:17.665: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:45:17.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6842" for this suite. 
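(Note: the dir-link volume type exercised in the two dir-link tests above is simply a symlink to a backing directory on the node; its setup and cleanup, as run via nsenter in the exec entries above, reduce to:

  mkdir /tmp/local-volume-test-<uuid>-backend
  ln -s /tmp/local-volume-test-<uuid>-backend /tmp/local-volume-test-<uuid>   # the local PV points at the symlink
  # ... after the test:
  rm -r /tmp/local-volume-test-<uuid> && rm -r /tmp/local-volume-test-<uuid>-backend
)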
• [SLOW TEST:18.807 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":70,"skipped":4541,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:45:17.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: tmpfs]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057"
Mar 22 00:45:21.989: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057" "/tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057"] Namespace:persistent-local-volumes-test-109 PodName:hostexec-latest-worker2-bvqhf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:21.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:45:22.130: INFO: Creating a PV followed by a PVC
Mar 22 00:45:22.254: INFO: Waiting for PV local-pv6trkh to bind to PVC pvc-t8qc6
Mar 22 00:45:22.254: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-t8qc6] to have phase Bound
Mar 22 00:45:22.258: INFO: PersistentVolumeClaim pvc-t8qc6 found but phase is Pending instead of Bound.
Mar 22 00:45:24.272: INFO: PersistentVolumeClaim pvc-t8qc6 found but phase is Pending instead of Bound.
Mar 22 00:45:26.276: INFO: PersistentVolumeClaim pvc-t8qc6 found but phase is Pending instead of Bound.
Mar 22 00:45:28.280: INFO: PersistentVolumeClaim pvc-t8qc6 found and phase=Bound (6.025254255s)
Mar 22 00:45:28.280: INFO: Waiting up to 3m0s for PersistentVolume local-pv6trkh to have phase Bound
Mar 22 00:45:28.283: INFO: PersistentVolume local-pv6trkh found and phase=Bound (3.254657ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Mar 22 00:45:28.289: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: tmpfs]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:45:28.290: INFO: Deleting PersistentVolumeClaim "pvc-t8qc6"
Mar 22 00:45:28.295: INFO: Deleting PersistentVolume "local-pv6trkh"
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057"
Mar 22 00:45:28.308: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057"] Namespace:persistent-local-volumes-test-109 PodName:hostexec-latest-worker2-bvqhf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:28.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 22 00:45:28.521: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-28357495-0d3e-4556-83b6-04c9c2518057] Namespace:persistent-local-volumes-test-109 PodName:hostexec-latest-worker2-bvqhf ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:28.521: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:45:28.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-109" for this suite.
S [SKIPPING] [10.822 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:45:28.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-ab6bbbd6-cc48-4731-a16e-52b949c06137
STEP: Creating a pod to test consume configMaps
Mar 22 00:45:28.823: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c" in namespace "configmap-7515" to be "Succeeded or Failed"
Mar 22 00:45:28.863: INFO: Pod "pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.830072ms
Mar 22 00:45:30.867: INFO: Pod "pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044157485s
Mar 22 00:45:32.872: INFO: Pod "pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c": Phase="Running", Reason="", readiness=true. Elapsed: 4.049035082s
Mar 22 00:45:34.875: INFO: Pod "pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05224521s
STEP: Saw pod success
Mar 22 00:45:34.875: INFO: Pod "pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c" satisfied condition "Succeeded or Failed"
Mar 22 00:45:34.878: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c container agnhost-container:
STEP: delete the pod
Mar 22 00:45:34.932: INFO: Waiting for pod pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c to disappear
Mar 22 00:45:34.947: INFO: Pod pod-configmaps-5c1847f0-1279-4ca2-a431-7649928a622c no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:45:34.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7515" for this suite.
• [SLOW TEST:6.281 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":71,"skipped":4598,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:45:34.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7"
Mar 22 00:45:39.183: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7 && dd if=/dev/zero of=/tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7/file] Namespace:persistent-local-volumes-test-9699 PodName:hostexec-latest-worker-lb8vm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:39.183: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:45:39.721: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9699 PodName:hostexec-latest-worker-lb8vm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:39.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:45:39.810: INFO: Creating a PV followed by a PVC
Mar 22 00:45:39.896: INFO: Waiting for PV local-pv6j4tn to bind to PVC pvc-xpnjn
Mar 22 00:45:39.897: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-xpnjn] to have phase Bound
Mar 22 00:45:39.899: INFO: PersistentVolumeClaim pvc-xpnjn found but phase is Pending instead of Bound.
Mar 22 00:45:41.903: INFO: PersistentVolumeClaim pvc-xpnjn found and phase=Bound (2.006438696s)
Mar 22 00:45:41.903: INFO: Waiting up to 3m0s for PersistentVolume local-pv6j4tn to have phase Bound
Mar 22 00:45:41.907: INFO: PersistentVolume local-pv6j4tn found and phase=Bound (3.769235ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 22 00:45:45.935: INFO: pod "pod-15da34fe-cd59-4183-a6e1-ac91acfce97b" created on Node "latest-worker"
STEP: Writing in pod1
Mar 22 00:45:45.935: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9699 PodName:pod-15da34fe-cd59-4183-a6e1-ac91acfce97b ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:45:45.935: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:45:46.039: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000125 seconds, 140.6KB/s", err: <nil>
[It] should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Mar 22 00:45:46.039: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-9699 PodName:pod-15da34fe-cd59-4183-a6e1-ac91acfce97b ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:45:46.039: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:45:46.132: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: <nil>
STEP: Writing in pod1
Mar 22 00:45:46.132: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9699 PodName:pod-15da34fe-cd59-4183-a6e1-ac91acfce97b ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:45:46.132: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:45:46.224: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000083 seconds, 129.4KB/s", err: <nil>
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-15da34fe-cd59-4183-a6e1-ac91acfce97b in namespace persistent-local-volumes-test-9699
[AfterEach] [Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:45:46.229: INFO: Deleting PersistentVolumeClaim "pvc-xpnjn"
Mar 22 00:45:46.262: INFO: Deleting PersistentVolume "local-pv6j4tn"
Mar 22 00:45:46.295: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9699 PodName:hostexec-latest-worker-lb8vm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:46.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7/file
Mar 22 00:45:46.408: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9699 PodName:hostexec-latest-worker-lb8vm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:46.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7
Mar 22 00:45:46.483: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-56d44fc9-0576-4422-b4d0-53d8a95a8ba7] Namespace:persistent-local-volumes-test-9699 PodName:hostexec-latest-worker-lb8vm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:46.483: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:45:46.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9699" for this suite.
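The [Volume type: block] case presents the volume to the pod as a raw device (mapped at /mnt/volume1 rather than mounted there), which is why the write goes through dd and the read back through hexdump instead of ordinary file I/O. A hedged sketch of the loop-device flow seen in the log; paths are illustrative, and the device lookup omits the extra output redirections the test wraps around losetup:

    # Node side: create a 20 MiB backing file and attach it to a loop device.
    DIR=/tmp/local-volume-test-demo
    mkdir -p "${DIR}"
    dd if=/dev/zero of="${DIR}/file" bs=4096 count=5120
    losetup -f "${DIR}/file"
    # Same grep/awk idiom the test uses to find which loop device was chosen.
    LOOPDEV=$(losetup | grep "${DIR}/file" | awk '{ print $1 }')

    # Pod side: stage data in a regular file, then dd it onto the raw device.
    mkdir -p /tmp/mnt/volume1
    echo test-file-content > /tmp/mnt/volume1/test-file
    dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100

    # Read the first 100 bytes back as printable characters.
    hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1

    # Node side teardown: detach the loop device and remove the backing file.
    losetup -d "${LOOPDEV}" && rm -r "${DIR}"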
• [SLOW TEST:11.740 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":72,"skipped":4713,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:45:46.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Mar 22 00:45:46.765: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:45:46.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8686" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.089 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total number of volumes in A/D Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322

  Only supported for providers [gce gke aws] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:45:46.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Mar 22 00:45:47.179: INFO: Waiting up to 5m0s for pod "pod-5bdad439-5866-441d-9a6d-ab633669b533" in namespace "emptydir-8812" to be "Succeeded or Failed"
Mar 22 00:45:47.183: INFO: Pod "pod-5bdad439-5866-441d-9a6d-ab633669b533": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907227ms
Mar 22 00:45:49.554: INFO: Pod "pod-5bdad439-5866-441d-9a6d-ab633669b533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375592797s
Mar 22 00:45:51.592: INFO: Pod "pod-5bdad439-5866-441d-9a6d-ab633669b533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413157919s
Mar 22 00:45:53.632: INFO: Pod "pod-5bdad439-5866-441d-9a6d-ab633669b533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.45335028s
STEP: Saw pod success
Mar 22 00:45:53.632: INFO: Pod "pod-5bdad439-5866-441d-9a6d-ab633669b533" satisfied condition "Succeeded or Failed"
Mar 22 00:45:53.801: INFO: Trying to get logs from node latest-worker2 pod pod-5bdad439-5866-441d-9a6d-ab633669b533 container test-container:
STEP: delete the pod
Mar 22 00:45:54.460: INFO: Waiting for pod pod-5bdad439-5866-441d-9a6d-ab633669b533 to disappear
Mar 22 00:45:54.474: INFO: Pod pod-5bdad439-5866-441d-9a6d-ab633669b533 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:45:54.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8812" for this suite.
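The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines are the framework polling the pod's status.phase every couple of seconds. An equivalent check can be reproduced by hand with kubectl; the loop below is illustrative only, not the framework's own code, with the pod and namespace names taken from the run above:

    # Illustrative re-creation of the pod-phase poll seen in the log.
    NS=emptydir-8812
    POD=pod-5bdad439-5866-441d-9a6d-ab633669b533

    while true; do
      phase=$(kubectl -n "${NS}" get pod "${POD}" -o jsonpath='{.status.phase}')
      case "${phase}" in
        Succeeded|Failed) echo "pod reached phase ${phase}"; break ;;
        *) sleep 2 ;;
      esac
    done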
• [SLOW TEST:7.767 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":133,"completed":73,"skipped":4837,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:45:54.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0"
Mar 22 00:45:59.165: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0 && dd if=/dev/zero of=/tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0/file] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:59.165: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:45:59.376: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:59.376: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:45:59.482: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0 && chmod o+rwx /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:45:59.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 22 00:45:59.903: INFO: Creating a PV followed by a PVC
Mar 22 00:45:59.979: INFO: Waiting for PV local-pvpdq88 to bind to PVC pvc-ww7dp
Mar 22 00:45:59.979: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ww7dp] to have phase Bound
Mar 22 00:46:00.022: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:02.026: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:04.030: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:06.033: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:08.038: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:10.042: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:12.190: INFO: PersistentVolumeClaim pvc-ww7dp found but phase is Pending instead of Bound.
Mar 22 00:46:14.197: INFO: PersistentVolumeClaim pvc-ww7dp found and phase=Bound (14.217438655s)
Mar 22 00:46:14.197: INFO: Waiting up to 3m0s for PersistentVolume local-pvpdq88 to have phase Bound
Mar 22 00:46:14.200: INFO: PersistentVolume local-pvpdq88 found and phase=Bound (2.732218ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 22 00:46:20.308: INFO: pod "pod-95f058b8-8149-4ac3-b077-8ed8a4a4e1dd" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 22 00:46:20.308: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3285 PodName:pod-95f058b8-8149-4ac3-b077-8ed8a4a4e1dd ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:46:20.308: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:46:20.414: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Mar 22 00:46:20.414: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3285 PodName:pod-95f058b8-8149-4ac3-b077-8ed8a4a4e1dd ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:46:20.414: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:46:20.803: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-95f058b8-8149-4ac3-b077-8ed8a4a4e1dd in namespace persistent-local-volumes-test-3285
[AfterEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:46:20.957: INFO: Deleting PersistentVolumeClaim "pvc-ww7dp"
Mar 22 00:46:21.101: INFO: Deleting PersistentVolume "local-pvpdq88"
Mar 22 00:46:21.506: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:46:21.506: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:46:21.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:46:21.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0/file
Mar 22 00:46:21.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:46:21.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0
Mar 22 00:46:21.824: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f8713f91-e79b-404a-9fee-61b0d5eb57f0] Namespace:persistent-local-volumes-test-3285 PodName:hostexec-latest-worker2-7s2x5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:46:21.824: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:46:22.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3285" for this suite.
• [SLOW TEST:27.512 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":74,"skipped":4851,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
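blockfswithformat layers a filesystem over the loop device, so setup adds mkfs and mount steps, and teardown has to unmount before detaching the device. A condensed sketch of the sequence from the log above, with illustrative paths:

    # Node side: loop device as before, then format and mount it.
    DIR=/tmp/local-volume-test-demo
    mkdir -p "${DIR}"
    dd if=/dev/zero of="${DIR}/file" bs=4096 count=5120
    losetup -f "${DIR}/file"
    LOOPDEV=$(losetup | grep "${DIR}/file" | awk '{ print $1 }')

    mkfs -t ext4 "${LOOPDEV}"             # format the device
    mount -t ext4 "${LOOPDEV}" "${DIR}"   # mount it over the test directory
    chmod o+rwx "${DIR}"                  # allow non-root pods to write

    # ... the pod then writes and reads /mnt/volume1/test-file as usual ...

    umount "${DIR}"
    losetup -d "${LOOPDEV}"
    rm -r "${DIR}"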
[sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:46:22.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] exhausted, late binding, no topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
STEP: Building a driver namespace object, basename csi-mock-volumes-2719
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock proxy
Mar 22 00:46:22.869: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-attacher
Mar 22 00:46:22.914: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2719
Mar 22 00:46:22.914: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2719
Mar 22 00:46:22.926: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2719
Mar 22 00:46:22.974: INFO: creating *v1.Role: csi-mock-volumes-2719-9150/external-attacher-cfg-csi-mock-volumes-2719
Mar 22 00:46:23.022: INFO: creating *v1.RoleBinding: csi-mock-volumes-2719-9150/csi-attacher-role-cfg
Mar 22 00:46:23.155: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-provisioner
Mar 22 00:46:23.184: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2719
Mar 22 00:46:23.184: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2719
Mar 22 00:46:23.327: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2719
Mar 22 00:46:23.364: INFO: creating *v1.Role: csi-mock-volumes-2719-9150/external-provisioner-cfg-csi-mock-volumes-2719
Mar 22 00:46:23.425: INFO: creating *v1.RoleBinding: csi-mock-volumes-2719-9150/csi-provisioner-role-cfg
Mar 22 00:46:23.556: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-resizer
Mar 22 00:46:23.669: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2719
Mar 22 00:46:23.669: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2719
Mar 22 00:46:23.706: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2719
Mar 22 00:46:23.803: INFO: creating *v1.Role: csi-mock-volumes-2719-9150/external-resizer-cfg-csi-mock-volumes-2719
Mar 22 00:46:23.812: INFO: creating *v1.RoleBinding: csi-mock-volumes-2719-9150/csi-resizer-role-cfg
Mar 22 00:46:23.853: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-snapshotter
Mar 22 00:46:23.860: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2719
Mar 22 00:46:23.860: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2719
Mar 22 00:46:23.866: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2719
Mar 22 00:46:23.886: INFO: creating *v1.Role: csi-mock-volumes-2719-9150/external-snapshotter-leaderelection-csi-mock-volumes-2719
Mar 22 00:46:23.894: INFO: creating *v1.RoleBinding: csi-mock-volumes-2719-9150/external-snapshotter-leaderelection
Mar 22 00:46:23.950: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-mock
Mar 22 00:46:23.954: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2719
Mar 22 00:46:23.961: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2719
Mar 22 00:46:23.966: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2719
Mar 22 00:46:24.007: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2719
Mar 22 00:46:24.033: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2719
Mar 22 00:46:24.129: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2719
Mar 22 00:46:24.134: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2719
Mar 22 00:46:24.140: INFO: creating *v1.StatefulSet: csi-mock-volumes-2719-9150/csi-mockplugin
Mar 22 00:46:24.163: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2719
Mar 22 00:46:24.387: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2719"
Mar 22 00:46:24.555: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2719 to register on node latest-worker2
I0322 00:46:43.400031 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0322 00:46:43.402252 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2719","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0322 00:46:43.447004 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0322 00:46:43.492685 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0322 00:46:44.062711 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2719","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0322 00:46:44.754125 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2719"},"Error":"","FullError":null}
STEP: Creating pod
Mar 22 00:46:53.253: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0322 00:46:55.222058 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0322 00:46:55.818799 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c"}}},"Error":"","FullError":null}
I0322 00:46:59.195720 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Mar 22 00:46:59.199: INFO: >>> kubeConfig: /root/.kube/config
I0322 00:46:59.658253 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c","storage.kubernetes.io/csiProvisionerIdentity":"1616374003493-8081-csi-mock-csi-mock-volumes-2719"}},"Response":{},"Error":"","FullError":null}
I0322 00:46:59.922125 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Mar 22 00:46:59.929: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:47:00.395: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:47:00.580: INFO: >>> kubeConfig: /root/.kube/config
I0322 00:47:00.756414 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c/globalmount","target_path":"/var/lib/kubelet/pods/204a745d-b386-406f-9feb-ceddf8fd05b6/volumes/kubernetes.io~csi/pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c","storage.kubernetes.io/csiProvisionerIdentity":"1616374003493-8081-csi-mock-csi-mock-volumes-2719"}},"Response":{},"Error":"","FullError":null}
I0322 00:47:11.861967 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0322 00:47:11.864619 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/204a745d-b386-406f-9feb-ceddf8fd05b6/volumes/kubernetes.io~csi/pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null}
Mar 22 00:47:15.334: INFO: Deleting pod "pvc-volume-tester-mnv2f" in namespace "csi-mock-volumes-2719"
Mar 22 00:47:15.467: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mnv2f" to be fully deleted
Mar 22 00:47:22.883: INFO: >>> kubeConfig: /root/.kube/config
I0322 00:47:23.174577 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/204a745d-b386-406f-9feb-ceddf8fd05b6/volumes/kubernetes.io~csi/pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c/mount"},"Response":{},"Error":"","FullError":null}
I0322 00:47:23.186919 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0322 00:47:23.188820 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c/globalmount"},"Response":{},"Error":"","FullError":null}
I0322 00:47:35.542311 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Mar 22 00:47:36.515: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7002685", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00356e510), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00356e528)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0040f1130), VolumeMode:(*v1.PersistentVolumeMode)(0xc0040f1140), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Mar 22 00:47:36.515: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7002692", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fa300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fa318)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030fa330), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030fa348)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004b4f790), VolumeMode:(*v1.PersistentVolumeMode)(0xc004b4f7a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Mar 22 00:47:36.516: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7002693", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2719", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d488), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d4a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d4b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d4d0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d4e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d500)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003ba8cd0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ba8ce0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Mar 22 00:47:36.516: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7002717", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2719", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d530), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d548)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d560), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d578)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d590), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d5a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c", StorageClassName:(*string)(0xc003ba8d10), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ba8d20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Mar 22 00:47:36.516: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7002720", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2719", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d5d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d5f0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d608), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d620)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d638), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d650)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c", StorageClassName:(*string)(0xc003ba8d60), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ba8d70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Mar 22 00:47:36.516: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7003053", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc004d4d680), DeletionGracePeriodSeconds:(*int64)(0xc002c13a48), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2719", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d698), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d6b0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d6c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d6e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d6f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d710)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c", StorageClassName:(*string)(0xc003ba8db0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ba8dd0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Mar 22 00:47:36.516: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6zmvh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2719", SelfLink:"", UID:"938758ff-4666-4c9a-b8fa-c4deaf338b7c", ResourceVersion:"7003054", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751970813, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc004d4d740), DeletionGracePeriodSeconds:(*int64)(0xc002c13ca8), Labels:map[string]string(nil),
Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2719", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d758), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d770)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d788), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d7a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004d4d7b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004d4d7d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-938758ff-4666-4c9a-b8fa-c4deaf338b7c", StorageClassName:(*string)(0xc003ba8e10), VolumeMode:(*v1.PersistentVolumeMode)(0xc003ba8e20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-mnv2f Mar 22 00:47:36.517: INFO: Deleting pod "pvc-volume-tester-mnv2f" in namespace "csi-mock-volumes-2719" STEP: Deleting claim pvc-6zmvh STEP: Deleting storageclass csi-mock-volumes-2719-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2719 STEP: Waiting for namespaces [csi-mock-volumes-2719] to vanish STEP: uninstalling csi mock driver Mar 22 00:47:42.579: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-attacher Mar 22 00:47:42.586: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2719 Mar 22 00:47:42.617: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2719 Mar 22 00:47:42.631: INFO: deleting *v1.Role: csi-mock-volumes-2719-9150/external-attacher-cfg-csi-mock-volumes-2719 Mar 22 00:47:42.641: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2719-9150/csi-attacher-role-cfg Mar 22 00:47:42.649: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-provisioner Mar 22 00:47:42.655: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2719 Mar 22 00:47:42.683: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2719 Mar 22 00:47:42.704: INFO: deleting *v1.Role: csi-mock-volumes-2719-9150/external-provisioner-cfg-csi-mock-volumes-2719 Mar 22 00:47:42.710: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2719-9150/csi-provisioner-role-cfg Mar 22 00:47:42.735: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-resizer Mar 22 00:47:42.740: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2719 Mar 22 00:47:42.755: INFO: 
deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2719 Mar 22 00:47:42.767: INFO: deleting *v1.Role: csi-mock-volumes-2719-9150/external-resizer-cfg-csi-mock-volumes-2719 Mar 22 00:47:42.775: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2719-9150/csi-resizer-role-cfg Mar 22 00:47:42.781: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-snapshotter Mar 22 00:47:42.787: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2719 Mar 22 00:47:42.818: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2719 Mar 22 00:47:42.862: INFO: deleting *v1.Role: csi-mock-volumes-2719-9150/external-snapshotter-leaderelection-csi-mock-volumes-2719 Mar 22 00:47:42.877: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2719-9150/external-snapshotter-leaderelection Mar 22 00:47:42.883: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2719-9150/csi-mock Mar 22 00:47:42.907: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2719 Mar 22 00:47:42.917: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2719 Mar 22 00:47:42.924: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2719 Mar 22 00:47:42.931: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2719 Mar 22 00:47:42.936: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2719 Mar 22 00:47:42.955: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2719 Mar 22 00:47:42.995: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2719 Mar 22 00:47:43.003: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2719-9150/csi-mockplugin Mar 22 00:47:43.009: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2719 STEP: deleting the driver namespace: csi-mock-volumes-2719-9150 STEP: Waiting for namespaces [csi-mock-volumes-2719-9150] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:48:39.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:136.963 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":133,"completed":75,"skipped":4879,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if 
fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:48:39.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Mar 22 00:48:39.139: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:48:39.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-2803" for this suite. 
S [SKIPPING] [0.114 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:48:39.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:48:43.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48-backend && ln -s /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48-backend /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48] Namespace:persistent-local-volumes-test-6148 PodName:hostexec-latest-worker-srpc2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:48:43.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:48:43.434: INFO: Creating a PV followed by a PVC Mar 22 00:48:43.446: INFO: Waiting for PV local-pv4l6fw to bind to PVC pvc-sdt26 Mar 22 00:48:43.446: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-sdt26] to have phase Bound Mar 22 00:48:43.476: INFO: PersistentVolumeClaim pvc-sdt26 found but phase is Pending instead of Bound. 
Mar 22 00:48:45.481: INFO: PersistentVolumeClaim pvc-sdt26 found and phase=Bound (2.034280397s) Mar 22 00:48:45.481: INFO: Waiting up to 3m0s for PersistentVolume local-pv4l6fw to have phase Bound Mar 22 00:48:45.483: INFO: PersistentVolume local-pv4l6fw found and phase=Bound (2.026129ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:48:49.539: INFO: pod "pod-fe707c51-7588-4cef-8afe-3ed1995cd6be" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:48:49.539: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6148 PodName:pod-fe707c51-7588-4cef-8afe-3ed1995cd6be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:48:49.539: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:48:49.639: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 22 00:48:49.639: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6148 PodName:pod-fe707c51-7588-4cef-8afe-3ed1995cd6be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:48:49.639: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:48:49.741: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 22 00:48:49.741: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6148 PodName:pod-fe707c51-7588-4cef-8afe-3ed1995cd6be ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:48:49.741: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:48:49.837: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-fe707c51-7588-4cef-8afe-3ed1995cd6be in namespace persistent-local-volumes-test-6148 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:48:49.843: INFO: Deleting PersistentVolumeClaim "pvc-sdt26" Mar 22 00:48:49.879: INFO: Deleting PersistentVolume "local-pv4l6fw" STEP: Removing the test directory Mar 22 00:48:49.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48 && rm -r /tmp/local-volume-test-bbc20143-4aa8-4231-b311-007cdcf1ea48-backend] Namespace:persistent-local-volumes-test-6148 PodName:hostexec-latest-worker-srpc2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 22 00:48:49.896: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:48:50.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6148" for this suite. • [SLOW TEST:10.912 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":76,"skipped":4904,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:48:50.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 
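The ExecWithOptions entries throughout these local-volume specs are pod execs over the API server's exec subresource: the test writes and reads /mnt/volume1/test-file by running /bin/sh -c inside the write-pod container and capturing stdout/stderr, exactly as the podRWCmdExec lines report. A minimal client-go sketch of one such exec, with a hypothetical execInPod helper standing in for the framework's wrapper:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command in the named container and returns stdout
// and stderr, roughly what each ExecWithOptions log entry corresponds to.
func execInPod(kubeconfig, ns, pod, container, cmd string) (string, string, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return "", "", err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return "", "", err
	}
	// Build a POST against the pod's exec subresource.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

func main() {
	// Mirrors the write step from the dir-link spec above; all names are
	// taken from this run's log and will differ between runs.
	out, errOut, err := execInPod("/root/.kube/config",
		"persistent-local-volumes-test-6148",
		"pod-fe707c51-7588-4cef-8afe-3ed1995cd6be",
		"write-pod",
		"mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file")
	fmt.Printf("out=%q stderr=%q err=%v\n", out, errOut, err)
}

The hostexec variants seen in the setup steps use the same mechanism, but the command is wrapped in nsenter --mount=/rootfs/proc/1/ns/mnt so that mkdir, losetup, and mount run in the node's mount namespace rather than the helper pod's.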
[BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:48:54.189: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d && mount --bind /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d] Namespace:persistent-local-volumes-test-9029 PodName:hostexec-latest-worker2-bkkf2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:48:54.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:48:54.311: INFO: Creating a PV followed by a PVC Mar 22 00:48:54.333: INFO: Waiting for PV local-pvrrqps to bind to PVC pvc-852rr Mar 22 00:48:54.333: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-852rr] to have phase Bound Mar 22 00:48:54.371: INFO: PersistentVolumeClaim pvc-852rr found but phase is Pending instead of Bound. Mar 22 00:48:56.383: INFO: PersistentVolumeClaim pvc-852rr found but phase is Pending instead of Bound. Mar 22 00:48:58.386: INFO: PersistentVolumeClaim pvc-852rr found and phase=Bound (4.053518047s) Mar 22 00:48:58.386: INFO: Waiting up to 3m0s for PersistentVolume local-pvrrqps to have phase Bound Mar 22 00:48:58.389: INFO: PersistentVolume local-pvrrqps found and phase=Bound (2.789198ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:49:02.425: INFO: pod "pod-dd28b843-0bd5-47e9-92a8-d1e2b7236ebe" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:49:02.425: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9029 PodName:pod-dd28b843-0bd5-47e9-92a8-d1e2b7236ebe ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:02.425: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:02.530: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 22 00:49:02.530: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9029 PodName:pod-dd28b843-0bd5-47e9-92a8-d1e2b7236ebe ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:02.530: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:02.636: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 22 00:49:02.636: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9029 PodName:pod-dd28b843-0bd5-47e9-92a8-d1e2b7236ebe ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:02.636: INFO: >>> kubeConfig: /root/.kube/config Mar 22 
00:49:02.742: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-dd28b843-0bd5-47e9-92a8-d1e2b7236ebe in namespace persistent-local-volumes-test-9029 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:49:02.784: INFO: Deleting PersistentVolumeClaim "pvc-852rr" Mar 22 00:49:02.806: INFO: Deleting PersistentVolume "local-pvrrqps" STEP: Removing the test directory Mar 22 00:49:02.818: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d && rm -r /tmp/local-volume-test-e29a126d-f613-4dee-8edd-c9d13bae033d] Namespace:persistent-local-volumes-test-9029 PodName:hostexec-latest-worker2-bkkf2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:02.818: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:49:02.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9029" for this suite. • [SLOW TEST:12.940 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":77,"skipped":4945,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write 
from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:03.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6" Mar 22 00:49:07.183: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6 && dd if=/dev/zero of=/tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6/file] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:07.183: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:07.367: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:07.367: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:07.483: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6 && chmod o+rwx /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:07.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:49:07.895: INFO: Creating a PV followed by a PVC Mar 22 00:49:07.934: INFO: Waiting for PV local-pvjj78p to bind to PVC pvc-n5bjd Mar 22 00:49:07.934: INFO: Waiting up to 3m0s for 
PersistentVolumeClaims [pvc-n5bjd] to have phase Bound Mar 22 00:49:07.937: INFO: PersistentVolumeClaim pvc-n5bjd found but phase is Pending instead of Bound. Mar 22 00:49:09.940: INFO: PersistentVolumeClaim pvc-n5bjd found but phase is Pending instead of Bound. Mar 22 00:49:11.944: INFO: PersistentVolumeClaim pvc-n5bjd found and phase=Bound (4.010171512s) Mar 22 00:49:11.944: INFO: Waiting up to 3m0s for PersistentVolume local-pvjj78p to have phase Bound Mar 22 00:49:11.947: INFO: PersistentVolume local-pvjj78p found and phase=Bound (2.940685ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 22 00:49:11.953: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:49:11.954: INFO: Deleting PersistentVolumeClaim "pvc-n5bjd" Mar 22 00:49:11.959: INFO: Deleting PersistentVolume "local-pvjj78p" Mar 22 00:49:11.970: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:11.970: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:12.177: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:12.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6/file Mar 22 00:49:12.281: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:12.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6 Mar 22 00:49:12.399: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fb871a2e-8540-4a2f-9603-6db9012db6c6] Namespace:persistent-local-volumes-test-7924 PodName:hostexec-latest-worker2-sst89 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:12.399: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:49:12.501: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "persistent-local-volumes-test-7924" for this suite. S [SKIPPING] [9.509 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:12.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Mar 22 00:49:12.613: INFO: Waiting up to 5m0s for pod "metadata-volume-f010d845-3280-4c67-b881-2b104b238121" in namespace "projected-5211" to be "Succeeded or Failed" Mar 22 00:49:12.621: INFO: Pod "metadata-volume-f010d845-3280-4c67-b881-2b104b238121": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345584ms Mar 22 00:49:14.625: INFO: Pod "metadata-volume-f010d845-3280-4c67-b881-2b104b238121": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012399508s Mar 22 00:49:16.630: INFO: Pod "metadata-volume-f010d845-3280-4c67-b881-2b104b238121": Phase="Running", Reason="", readiness=true. Elapsed: 4.017346186s Mar 22 00:49:18.634: INFO: Pod "metadata-volume-f010d845-3280-4c67-b881-2b104b238121": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.020767048s STEP: Saw pod success Mar 22 00:49:18.634: INFO: Pod "metadata-volume-f010d845-3280-4c67-b881-2b104b238121" satisfied condition "Succeeded or Failed" Mar 22 00:49:18.637: INFO: Trying to get logs from node latest-worker pod metadata-volume-f010d845-3280-4c67-b881-2b104b238121 container client-container: STEP: delete the pod Mar 22 00:49:18.685: INFO: Waiting for pod metadata-volume-f010d845-3280-4c67-b881-2b104b238121 to disappear Mar 22 00:49:18.712: INFO: Pod metadata-volume-f010d845-3280-4c67-b881-2b104b238121 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:49:18.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5211" for this suite. • [SLOW TEST:6.208 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":78,"skipped":5041,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:18.724: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:49:22.804: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-fb0ceb3c-333d-4328-98f2-b14dcad7820f-backend && ln -s /tmp/local-volume-test-fb0ceb3c-333d-4328-98f2-b14dcad7820f-backend /tmp/local-volume-test-fb0ceb3c-333d-4328-98f2-b14dcad7820f] Namespace:persistent-local-volumes-test-1211 PodName:hostexec-latest-worker2-jxp2r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:22.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:49:22.911: INFO: Creating a PV followed by a PVC Mar 22 00:49:22.934: INFO: Waiting for PV local-pv84wpn to bind to PVC pvc-qpgxc Mar 22 00:49:22.934: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qpgxc] to have phase Bound Mar 22 00:49:22.970: INFO: PersistentVolumeClaim pvc-qpgxc found but phase is Pending instead of Bound. Mar 22 00:49:24.973: INFO: PersistentVolumeClaim pvc-qpgxc found and phase=Bound (2.03912082s) Mar 22 00:49:24.973: INFO: Waiting up to 3m0s for PersistentVolume local-pv84wpn to have phase Bound Mar 22 00:49:24.976: INFO: PersistentVolume local-pv84wpn found and phase=Bound (2.451353ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:49:31.035: INFO: pod "pod-f68b57d1-b208-4f0f-a3d6-b37189fed9a7" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:49:31.036: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1211 PodName:pod-f68b57d1-b208-4f0f-a3d6-b37189fed9a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:31.036: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:31.132: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:49:31.132: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1211 PodName:pod-f68b57d1-b208-4f0f-a3d6-b37189fed9a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:31.132: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:31.232: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-f68b57d1-b208-4f0f-a3d6-b37189fed9a7 in namespace persistent-local-volumes-test-1211 STEP: Creating pod2 STEP: Creating a pod Mar 22 00:49:35.551: INFO: pod "pod-6b949b8a-e7ff-407c-a11b-10b1cf42d002" created on Node "latest-worker2" STEP: Reading in pod2 Mar 22 00:49:35.551: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-1211 PodName:pod-6b949b8a-e7ff-407c-a11b-10b1cf42d002 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:35.551: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:35.663: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-6b949b8a-e7ff-407c-a11b-10b1cf42d002 in namespace persistent-local-volumes-test-1211 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:49:35.674: INFO: Deleting PersistentVolumeClaim "pvc-qpgxc" Mar 22 00:49:35.696: INFO: Deleting PersistentVolume "local-pv84wpn" STEP: Removing the test directory Mar 22 00:49:35.721: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fb0ceb3c-333d-4328-98f2-b14dcad7820f && rm -r /tmp/local-volume-test-fb0ceb3c-333d-4328-98f2-b14dcad7820f-backend] Namespace:persistent-local-volumes-test-1211 PodName:hostexec-latest-worker2-jxp2r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:35.721: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:49:35.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1211" for this suite. • [SLOW TEST:17.119 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":79,"skipped":5234,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local 
volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Multi-AZ Cluster Volumes should schedule pods in the same zones as statically provisioned PVs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:57 [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:35.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:46 Mar 22 00:49:35.983: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:49:35.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-4387" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.173 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:57 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:36.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a" Mar 22 00:49:40.209: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a && dd if=/dev/zero of=/tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a/file] Namespace:persistent-local-volumes-test-3553 PodName:hostexec-latest-worker2-4bw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:40.209: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:40.360: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3553 PodName:hostexec-latest-worker2-4bw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:40.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:49:40.482: INFO: Creating a PV followed by a PVC Mar 22 00:49:40.533: INFO: Waiting for PV local-pvg4h9k to bind to PVC pvc-v9s8j Mar 22 00:49:40.533: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-v9s8j] to have phase Bound Mar 22 00:49:40.549: INFO: PersistentVolumeClaim pvc-v9s8j found but phase is Pending instead of Bound. Mar 22 00:49:42.553: INFO: PersistentVolumeClaim pvc-v9s8j found and phase=Bound (2.020052999s) Mar 22 00:49:42.553: INFO: Waiting up to 3m0s for PersistentVolume local-pvg4h9k to have phase Bound Mar 22 00:49:42.556: INFO: PersistentVolume local-pvg4h9k found and phase=Bound (2.530164ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 22 00:49:46.583: INFO: pod "pod-5d99645e-8d8b-49d4-862e-81b082c183ae" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:49:46.583: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3553 PodName:pod-5d99645e-8d8b-49d4-862e-81b082c183ae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:46.583: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:46.682: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:49:46.682: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3553 PodName:pod-5d99645e-8d8b-49d4-862e-81b082c183ae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:46.682: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:46.779: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 22 00:49:50.829: INFO: pod "pod-f6249cc2-eb27-4666-a70f-0ea7f2a70dea" created on Node "latest-worker2" Mar 22 00:49:50.829: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3553 PodName:pod-f6249cc2-eb27-4666-a70f-0ea7f2a70dea ContainerName:write-pod Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:50.829: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:50.965: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 22 00:49:50.965: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3553 PodName:pod-f6249cc2-eb27-4666-a70f-0ea7f2a70dea ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:50.965: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:51.063: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 22 00:49:51.063: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3553 PodName:pod-5d99645e-8d8b-49d4-862e-81b082c183ae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:49:51.063: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:49:51.161: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5d99645e-8d8b-49d4-862e-81b082c183ae in namespace persistent-local-volumes-test-3553 STEP: Deleting pod2 STEP: Deleting pod pod-f6249cc2-eb27-4666-a70f-0ea7f2a70dea in namespace persistent-local-volumes-test-3553 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:49:51.240: INFO: Deleting PersistentVolumeClaim "pvc-v9s8j" Mar 22 00:49:51.245: INFO: Deleting PersistentVolume "local-pvg4h9k" Mar 22 00:49:51.251: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3553 PodName:hostexec-latest-worker2-4bw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:51.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a/file Mar 22 00:49:51.352: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3553 PodName:hostexec-latest-worker2-4bw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:51.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a Mar 22 00:49:51.465: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9d50353b-6a7d-497b-911d-61a96853b63a] Namespace:persistent-local-volumes-test-3553 PodName:hostexec-latest-worker2-4bw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:51.465: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:49:51.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3553" for this suite. • [SLOW TEST:15.587 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":80,"skipped":5303,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SS ------------------------------ [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:51.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Mar 22 00:49:55.771: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-58b18885-7d8b-4f92-8ca4-23e873fc73fd] Namespace:persistent-local-volumes-test-1425 PodName:hostexec-latest-worker2-4cj9t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:55.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:49:55.898: INFO: Creating a PV followed by a PVC Mar 22 00:49:55.925: INFO: Waiting for PV local-pvxwmcm to bind to PVC pvc-884ln Mar 22 00:49:55.925: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-884ln] to have phase Bound Mar 22 00:49:55.939: INFO: PersistentVolumeClaim pvc-884ln found but phase is Pending instead of Bound. Mar 22 00:49:57.943: INFO: PersistentVolumeClaim pvc-884ln found and phase=Bound (2.017954237s) Mar 22 00:49:57.943: INFO: Waiting up to 3m0s for PersistentVolume local-pvxwmcm to have phase Bound Mar 22 00:49:57.945: INFO: PersistentVolume local-pvxwmcm found and phase=Bound (2.144559ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir STEP: Initializing test volumes Mar 22 00:49:57.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-076041d5-925c-4a0c-8c46-503d16d7e88f] Namespace:persistent-local-volumes-test-1425 PodName:hostexec-latest-worker2-4cj9t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:49:57.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:49:58.037: INFO: Creating a PV followed by a PVC Mar 22 00:49:58.126: INFO: Waiting for PV local-pvmpb9v to bind to PVC pvc-v68r9 Mar 22 00:49:58.126: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-v68r9] to have phase Bound Mar 22 00:49:58.181: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:00.185: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:02.189: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:04.193: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:06.195: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:08.198: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:10.202: INFO: PersistentVolumeClaim pvc-v68r9 found but phase is Pending instead of Bound. Mar 22 00:50:12.206: INFO: PersistentVolumeClaim pvc-v68r9 found and phase=Bound (14.080060965s) Mar 22 00:50:12.206: INFO: Waiting up to 3m0s for PersistentVolume local-pvmpb9v to have phase Bound Mar 22 00:50:12.209: INFO: PersistentVolume local-pvmpb9v found and phase=Bound (2.814205ms) Mar 22 00:50:12.219: INFO: Waiting up to 5m0s for pod "pod-dc6e71fd-d2d4-49ae-959a-1b9dd901eb8d" in namespace "persistent-local-volumes-test-1425" to be "Unschedulable" Mar 22 00:50:12.259: INFO: Pod "pod-dc6e71fd-d2d4-49ae-959a-1b9dd901eb8d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.985009ms Mar 22 00:50:14.263: INFO: Pod "pod-dc6e71fd-d2d4-49ae-959a-1b9dd901eb8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044714042s Mar 22 00:50:14.263: INFO: Pod "pod-dc6e71fd-d2d4-49ae-959a-1b9dd901eb8d" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Mar 22 00:50:14.263: INFO: Deleting PersistentVolumeClaim "pvc-884ln" Mar 22 00:50:14.272: INFO: Deleting PersistentVolume "local-pvxwmcm" STEP: Removing the test directory Mar 22 00:50:14.289: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-58b18885-7d8b-4f92-8ca4-23e873fc73fd] Namespace:persistent-local-volumes-test-1425 PodName:hostexec-latest-worker2-4cj9t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:14.289: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:14.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1425" for this suite. • [SLOW TEST:22.886 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":133,"completed":81,"skipped":5305,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two 
pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:14.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878" Mar 22 00:50:20.872: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878 && dd if=/dev/zero of=/tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878/file] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:20.872: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:50:21.052: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:21.052: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:50:21.147: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878 && chmod o+rwx /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:21.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:50:21.543: INFO: Creating a PV followed by a PVC Mar 22 00:50:21.557: INFO: Waiting for PV local-pv4dxcm to bind to PVC pvc-dts44 Mar 22 00:50:21.557: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-dts44] to have phase Bound Mar 22 00:50:21.577: INFO: PersistentVolumeClaim pvc-dts44 found but phase is Pending instead of Bound. Mar 22 00:50:23.581: INFO: PersistentVolumeClaim pvc-dts44 found but phase is Pending instead of Bound. Mar 22 00:50:25.588: INFO: PersistentVolumeClaim pvc-dts44 found but phase is Pending instead of Bound. 
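The three nsenter commands above are the entire provisioning recipe for a "blockfswithformat" local volume: create a 20 MiB backing file, attach it to a free loop device, ask losetup which device it picked, then format and mount it. Condensed into a standalone shell sketch (these are the same commands the hostexec pod runs in the node's mount namespace; the temp path is a per-test value and the loop device varies, so both are illustrative here):

# Setup, as performed on the node via nsenter from the hostexec pod.
DIR=/tmp/local-volume-test-$(uuidgen)                       # per-test temp dir (illustrative)
mkdir -p "$DIR" && dd if=/dev/zero of="$DIR/file" bs=4096 count=5120   # 20 MiB backing file
losetup -f "$DIR/file"                                      # attach to the first free loop device
LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')     # discover which device losetup chose
mkfs -t ext4 "$LOOP" && mount -t ext4 "$LOOP" "$DIR" && chmod o+rwx "$DIR"
# Teardown, mirrored in the AfterEach further down:
#   umount "$DIR" && losetup -d "$LOOP" && rm -r "$DIR"
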
Mar 22 00:50:27.606: INFO: PersistentVolumeClaim pvc-dts44 found and phase=Bound (6.048684646s) Mar 22 00:50:27.606: INFO: Waiting up to 3m0s for PersistentVolume local-pv4dxcm to have phase Bound Mar 22 00:50:27.608: INFO: PersistentVolume local-pv4dxcm found and phase=Bound (2.430261ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:50:33.695: INFO: pod "pod-4229cc4b-8a8b-4b5e-9392-d2b3435b0502" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:50:33.695: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9248 PodName:pod-4229cc4b-8a8b-4b5e-9392-d2b3435b0502 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:50:33.695: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:50:33.875: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 22 00:50:33.875: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9248 PodName:pod-4229cc4b-8a8b-4b5e-9392-d2b3435b0502 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:50:33.875: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:50:33.987: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-4229cc4b-8a8b-4b5e-9392-d2b3435b0502 in namespace persistent-local-volumes-test-9248 STEP: Creating pod2 STEP: Creating a pod Mar 22 00:50:40.110: INFO: pod "pod-5e419135-843a-4cdf-9dc8-58d37ea4e146" created on Node "latest-worker2" STEP: Reading in pod2 Mar 22 00:50:40.110: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9248 PodName:pod-5e419135-843a-4cdf-9dc8-58d37ea4e146 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:50:40.110: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:50:40.199: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-5e419135-843a-4cdf-9dc8-58d37ea4e146 in namespace persistent-local-volumes-test-9248 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:50:40.205: INFO: Deleting PersistentVolumeClaim "pvc-dts44" Mar 22 00:50:40.271: INFO: Deleting PersistentVolume "local-pv4dxcm" Mar 22 00:50:40.289: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:40.289: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:50:40.514: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878/file | awk '{ print $1 }') 2>&1 > 
/dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:40.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878/file Mar 22 00:50:40.614: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:40.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878 Mar 22 00:50:40.720: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ec8fc7c7-f4c8-41a1-9690-eb83c7372878] Namespace:persistent-local-volumes-test-9248 PodName:hostexec-latest-worker2-hbfg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:50:40.720: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:41.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9248" for this suite. • [SLOW TEST:26.617 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":82,"skipped":5309,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the 
other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:41.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-7630 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 22 00:50:41.893: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-attacher Mar 22 00:50:41.897: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7630 Mar 22 00:50:41.897: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7630 Mar 22 00:50:41.911: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7630 Mar 22 00:50:41.941: INFO: creating *v1.Role: csi-mock-volumes-7630-1837/external-attacher-cfg-csi-mock-volumes-7630 Mar 22 00:50:41.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-7630-1837/csi-attacher-role-cfg Mar 22 00:50:42.032: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-provisioner Mar 22 00:50:42.067: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7630 Mar 22 00:50:42.067: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7630 Mar 22 00:50:42.277: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7630 Mar 22 00:50:42.409: INFO: creating *v1.Role: csi-mock-volumes-7630-1837/external-provisioner-cfg-csi-mock-volumes-7630 Mar 22 00:50:42.435: INFO: creating *v1.RoleBinding: csi-mock-volumes-7630-1837/csi-provisioner-role-cfg Mar 22 00:50:42.464: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-resizer Mar 22 00:50:42.504: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7630 Mar 22 00:50:42.504: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7630 Mar 22 00:50:42.535: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7630 Mar 22 00:50:42.556: INFO: creating *v1.Role: csi-mock-volumes-7630-1837/external-resizer-cfg-csi-mock-volumes-7630 Mar 22 00:50:42.572: INFO: creating *v1.RoleBinding: csi-mock-volumes-7630-1837/csi-resizer-role-cfg Mar 22 00:50:42.587: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-snapshotter Mar 22 00:50:42.594: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7630 Mar 22 00:50:42.594: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7630 Mar 22 00:50:42.600: INFO: creating *v1.ClusterRoleBinding: 
csi-snapshotter-role-csi-mock-volumes-7630 Mar 22 00:50:42.620: INFO: creating *v1.Role: csi-mock-volumes-7630-1837/external-snapshotter-leaderelection-csi-mock-volumes-7630 Mar 22 00:50:42.654: INFO: creating *v1.RoleBinding: csi-mock-volumes-7630-1837/external-snapshotter-leaderelection Mar 22 00:50:42.676: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-mock Mar 22 00:50:42.695: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7630 Mar 22 00:50:42.701: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7630 Mar 22 00:50:42.707: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7630 Mar 22 00:50:42.724: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7630 Mar 22 00:50:42.744: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7630 Mar 22 00:50:42.798: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7630 Mar 22 00:50:42.802: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7630 Mar 22 00:50:42.864: INFO: creating *v1.StatefulSet: csi-mock-volumes-7630-1837/csi-mockplugin Mar 22 00:50:42.870: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7630 Mar 22 00:50:42.924: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7630" Mar 22 00:50:42.934: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7630 to register on node latest-worker STEP: Creating pod with fsGroup Mar 22 00:50:57.650: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 22 00:50:57.655: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-htwg2] to have phase Bound Mar 22 00:50:57.660: INFO: PersistentVolumeClaim pvc-htwg2 found but phase is Pending instead of Bound. 
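Behind all of the RBAC and StatefulSet objects created above, the knob this spec actually exercises is a single field on the CSIDriver object the mock plugin registers. A minimal sketch of such an object follows; metadata.name is taken from the log, while attachRequired and volumeLifecycleModes are assumptions not shown in the output:

# Minimal sketch of the CSIDriver object behind this spec.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi-mock-csi-mock-volumes-7630
spec:
  fsGroupPolicy: None        # field under test: kubelet must not change volume ownership
  attachRequired: true       # assumption; not visible in the log
  volumeLifecycleModes:
    - Persistent             # assumption; this spec uses a PVC-backed volume
EOF

With fsGroupPolicy set to None the kubelet skips its fsGroup ownership pass, which is why the `ls -l` further down reports `root root` even though the pod was created with an fsGroup.
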
Mar 22 00:50:59.664: INFO: PersistentVolumeClaim pvc-htwg2 found and phase=Bound (2.008846054s) Mar 22 00:51:05.723: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-7630] Namespace:csi-mock-volumes-7630 PodName:pvc-volume-tester-lbwg6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:51:05.723: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:51:05.828: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-7630/csi-mock-volumes-7630'; sync] Namespace:csi-mock-volumes-7630 PodName:pvc-volume-tester-lbwg6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:51:05.828: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:51:59.520: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-7630/csi-mock-volumes-7630] Namespace:csi-mock-volumes-7630 PodName:pvc-volume-tester-lbwg6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:51:59.520: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:51:59.677: INFO: pod csi-mock-volumes-7630/pvc-volume-tester-lbwg6 exec for cmd ls -l /mnt/test/csi-mock-volumes-7630/csi-mock-volumes-7630, stdout: -rw-r--r-- 1 root root 13 Mar 22 00:51 /mnt/test/csi-mock-volumes-7630/csi-mock-volumes-7630, stderr: Mar 22 00:51:59.677: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-7630] Namespace:csi-mock-volumes-7630 PodName:pvc-volume-tester-lbwg6 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:51:59.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-lbwg6 Mar 22 00:51:59.882: INFO: Deleting pod "pvc-volume-tester-lbwg6" in namespace "csi-mock-volumes-7630" Mar 22 00:51:59.969: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lbwg6" to be fully deleted STEP: Deleting claim pvc-htwg2 Mar 22 00:52:36.055: INFO: Waiting up to 2m0s for PersistentVolume pvc-a1a275f2-039c-43bd-80cc-a37f1cb08814 to get deleted Mar 22 00:52:36.081: INFO: PersistentVolume pvc-a1a275f2-039c-43bd-80cc-a37f1cb08814 found and phase=Bound (26.411828ms) Mar 22 00:52:38.086: INFO: PersistentVolume pvc-a1a275f2-039c-43bd-80cc-a37f1cb08814 was removed STEP: Deleting storageclass csi-mock-volumes-7630-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7630 STEP: Waiting for namespaces [csi-mock-volumes-7630] to vanish STEP: uninstalling csi mock driver Mar 22 00:52:44.112: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-attacher Mar 22 00:52:44.119: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7630 Mar 22 00:52:44.161: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7630 Mar 22 00:52:44.169: INFO: deleting *v1.Role: csi-mock-volumes-7630-1837/external-attacher-cfg-csi-mock-volumes-7630 Mar 22 00:52:44.174: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7630-1837/csi-attacher-role-cfg Mar 22 00:52:44.180: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-provisioner Mar 22 00:52:44.186: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7630 Mar 22 00:52:44.192: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7630 Mar 22 00:52:44.203: INFO: deleting *v1.Role: 
csi-mock-volumes-7630-1837/external-provisioner-cfg-csi-mock-volumes-7630 Mar 22 00:52:44.233: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7630-1837/csi-provisioner-role-cfg Mar 22 00:52:44.240: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-resizer Mar 22 00:52:44.269: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7630 Mar 22 00:52:44.295: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7630 Mar 22 00:52:44.305: INFO: deleting *v1.Role: csi-mock-volumes-7630-1837/external-resizer-cfg-csi-mock-volumes-7630 Mar 22 00:52:44.362: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7630-1837/csi-resizer-role-cfg Mar 22 00:52:44.366: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-snapshotter Mar 22 00:52:44.377: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7630 Mar 22 00:52:44.388: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7630 Mar 22 00:52:44.398: INFO: deleting *v1.Role: csi-mock-volumes-7630-1837/external-snapshotter-leaderelection-csi-mock-volumes-7630 Mar 22 00:52:44.405: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7630-1837/external-snapshotter-leaderelection Mar 22 00:52:44.431: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7630-1837/csi-mock Mar 22 00:52:44.451: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7630 Mar 22 00:52:44.456: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7630 Mar 22 00:52:44.483: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7630 Mar 22 00:52:44.492: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7630 Mar 22 00:52:44.498: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7630 Mar 22 00:52:44.504: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7630 Mar 22 00:52:44.510: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7630 Mar 22 00:52:44.515: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7630-1837/csi-mockplugin Mar 22 00:52:44.522: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7630 STEP: deleting the driver namespace: csi-mock-volumes-7630-1837 STEP: Waiting for namespaces [csi-mock-volumes-7630-1837] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:53:28.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:167.485 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":133,"completed":83,"skipped":5319,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local 
volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSS ------------------------------ [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:53:28.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Mar 22 00:53:34.729: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a StorageClass volume-provisioning-707-glusterdptest STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-707 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-707-glusterdptest,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Mar 22 00:53:34.883: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-kjt6d] to have phase Bound Mar 22 00:53:34.904: INFO: PersistentVolumeClaim pvc-kjt6d found but phase is Pending instead of Bound. 
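The claim dump above is a raw Go struct print and hard to read; roughly the same objects expressed as manifests are sketched below. The class and claim settings (ReadWriteOnce, 2Gi, the storageClassName) are from the log; the provisioner name is the in-tree glusterfs provisioner this spec appears to drive, and the resturl parameter is hypothetical, standing in for the "Gluster DP server" pod the spec starts in the same namespace:

# Rough kubectl equivalent of the StorageClass + PVC the spec creates.
cat <<'EOF' | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: volume-provisioning-707-glusterdptest
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.0.0.1:8081"       # hypothetical heketi-style endpoint
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  generateName: pvc-                     # matches the generated name pvc-kjt6d in the log
  namespace: volume-provisioning-707
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: volume-provisioning-707-glusterdptest
  resources:
    requests:
      storage: 2Gi
EOF
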
Mar 22 00:53:36.932: INFO: PersistentVolumeClaim pvc-kjt6d found and phase=Bound (2.049159494s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-707"/"pvc-kjt6d" STEP: deleting the claim's PV "pvc-41bf6373-2884-4e84-b254-3057b7c813c7" Mar 22 00:53:36.955: INFO: Waiting up to 20m0s for PersistentVolume pvc-41bf6373-2884-4e84-b254-3057b7c813c7 to get deleted Mar 22 00:53:36.989: INFO: PersistentVolume pvc-41bf6373-2884-4e84-b254-3057b7c813c7 found and phase=Bound (33.987498ms) Mar 22 00:53:41.993: INFO: PersistentVolume pvc-41bf6373-2884-4e84-b254-3057b7c813c7 was removed Mar 22 00:53:41.993: INFO: deleting claim "volume-provisioning-707"/"pvc-kjt6d" Mar 22 00:53:41.996: INFO: deleting storage class volume-provisioning-707-glusterdptest [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:53:42.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-707" for this suite. • [SLOW TEST:13.489 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":133,"completed":84,"skipped":5327,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:53:42.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:53:46.202: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802-backend && mount --bind /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802-backend /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802-backend && ln -s /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802-backend /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802] Namespace:persistent-local-volumes-test-1652 PodName:hostexec-latest-worker-jrpth ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:53:46.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:53:46.345: INFO: Creating a PV followed by a PVC Mar 22 00:53:46.359: INFO: Waiting for PV local-pv8lksh to bind to PVC pvc-5trrq Mar 22 00:53:46.359: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-5trrq] to have phase Bound Mar 22 00:53:46.419: INFO: PersistentVolumeClaim pvc-5trrq found but phase is Pending instead of Bound. Mar 22 00:53:48.425: INFO: PersistentVolumeClaim pvc-5trrq found but phase is Pending instead of Bound. Mar 22 00:53:50.429: INFO: PersistentVolumeClaim pvc-5trrq found but phase is Pending instead of Bound. Mar 22 00:53:52.432: INFO: PersistentVolumeClaim pvc-5trrq found but phase is Pending instead of Bound. Mar 22 00:53:54.436: INFO: PersistentVolumeClaim pvc-5trrq found but phase is Pending instead of Bound. Mar 22 00:53:56.448: INFO: PersistentVolumeClaim pvc-5trrq found but phase is Pending instead of Bound. 
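The setup one-liner above is the whole definition of the "dir-link-bindmounted" volume type, unpacked below; the teardown in the AfterEach further down simply reverses each step. The path is the per-test value from this spec:

DIR=/tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802   # per-test value from the log
mkdir "$DIR-backend"                        # backing directory
mount --bind "$DIR-backend" "$DIR-backend"  # self bind-mount of the backing dir
ln -s "$DIR-backend" "$DIR"                 # the PV is created against the symlink, presumably its local.path
# Teardown: rm "$DIR" && umount "$DIR-backend" && rm -r "$DIR-backend"
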
Mar 22 00:53:58.478: INFO: PersistentVolumeClaim pvc-5trrq found and phase=Bound (12.118673246s) Mar 22 00:53:58.478: INFO: Waiting up to 3m0s for PersistentVolume local-pv8lksh to have phase Bound Mar 22 00:53:58.481: INFO: PersistentVolume local-pv8lksh found and phase=Bound (3.282931ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:54:04.629: INFO: pod "pod-2b3f36db-9a49-41b2-a142-bd94f1b820aa" created on Node "latest-worker" STEP: Writing in pod1 Mar 22 00:54:04.630: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1652 PodName:pod-2b3f36db-9a49-41b2-a142-bd94f1b820aa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:54:04.630: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:54:04.746: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 22 00:54:04.746: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1652 PodName:pod-2b3f36db-9a49-41b2-a142-bd94f1b820aa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:54:04.746: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:54:04.844: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2b3f36db-9a49-41b2-a142-bd94f1b820aa in namespace persistent-local-volumes-test-1652 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:54:04.848: INFO: Deleting PersistentVolumeClaim "pvc-5trrq" Mar 22 00:54:04.889: INFO: Deleting PersistentVolume "local-pv8lksh" STEP: Removing the test directory Mar 22 00:54:04.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802 && umount /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802-backend && rm -r /tmp/local-volume-test-a2ee5b2f-04d8-4122-ae5a-53740cbe9802-backend] Namespace:persistent-local-volumes-test-1652 PodName:hostexec-latest-worker-jrpth ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:54:04.940: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:54:05.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1652" for this suite. 
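Every write/read assertion in these local-volume specs is the same two-command exec pair seen in the spec above. A manual kubectl equivalent, using the names from this spec (the framework itself calls the pod's exec subresource via the API rather than shelling out to kubectl):

NS=persistent-local-volumes-test-1652
POD=pod-2b3f36db-9a49-41b2-a142-bd94f1b820aa
kubectl -n "$NS" exec "$POD" -c write-pod -- \
  /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl -n "$NS" exec "$POD" -c write-pod -- /bin/sh -c 'cat /mnt/volume1/test-file'
# expected stdout: test-file-content
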
• [SLOW TEST:23.019 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":85,"skipped":5598,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:54:05.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 22 00:54:09.245: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-9f16ffb6-b049-4d7b-86e1-547584acaf26 && mount --bind 
/tmp/local-volume-test-9f16ffb6-b049-4d7b-86e1-547584acaf26 /tmp/local-volume-test-9f16ffb6-b049-4d7b-86e1-547584acaf26] Namespace:persistent-local-volumes-test-3661 PodName:hostexec-latest-worker2-cxp9p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 22 00:54:09.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 22 00:54:09.391: INFO: Creating a PV followed by a PVC Mar 22 00:54:09.403: INFO: Waiting for PV local-pvqkc66 to bind to PVC pvc-8xl8l Mar 22 00:54:09.403: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8xl8l] to have phase Bound Mar 22 00:54:09.499: INFO: PersistentVolumeClaim pvc-8xl8l found but phase is Pending instead of Bound. Mar 22 00:54:11.534: INFO: PersistentVolumeClaim pvc-8xl8l found but phase is Pending instead of Bound. Mar 22 00:54:13.538: INFO: PersistentVolumeClaim pvc-8xl8l found and phase=Bound (4.135562125s) Mar 22 00:54:13.538: INFO: Waiting up to 3m0s for PersistentVolume local-pvqkc66 to have phase Bound Mar 22 00:54:13.542: INFO: PersistentVolume local-pvqkc66 found and phase=Bound (3.174077ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 22 00:54:17.613: INFO: pod "pod-d3cb56de-ac2d-470e-b038-c82e46f0173b" created on Node "latest-worker2" STEP: Writing in pod1 Mar 22 00:54:17.613: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3661 PodName:pod-d3cb56de-ac2d-470e-b038-c82e46f0173b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:54:17.613: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:54:17.708: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 22 00:54:17.708: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3661 PodName:pod-d3cb56de-ac2d-470e-b038-c82e46f0173b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:54:17.708: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:54:17.814: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-d3cb56de-ac2d-470e-b038-c82e46f0173b in namespace persistent-local-volumes-test-3661 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 22 00:54:17.833: INFO: Deleting PersistentVolumeClaim "pvc-8xl8l" Mar 22 00:54:17.860: INFO: Deleting PersistentVolume "local-pvqkc66" STEP: Removing the test directory Mar 22 00:54:17.871: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
[AfterEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-d3cb56de-ac2d-470e-b038-c82e46f0173b in namespace persistent-local-volumes-test-3661
[AfterEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 22 00:54:17.833: INFO: Deleting PersistentVolumeClaim "pvc-8xl8l"
Mar 22 00:54:17.860: INFO: Deleting PersistentVolume "local-pvqkc66"
STEP: Removing the test directory
Mar 22 00:54:17.871: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-9f16ffb6-b049-4d7b-86e1-547584acaf26 && rm -r /tmp/local-volume-test-9f16ffb6-b049-4d7b-86e1-547584acaf26] Namespace:persistent-local-volumes-test-3661 PodName:hostexec-latest-worker2-cxp9p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 22 00:54:17.871: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:54:18.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3661" for this suite.

• [SLOW TEST:13.155 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":86,"skipped":5617,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
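The "Creating a PV followed by a PVC" step in this spec corresponds roughly to the hand-written manifest below; the name, capacity, and storage class are illustrative assumptions, but a `local` PV does require the nodeAffinity block pinning it to the node that holds the directory:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # suite-generated names look like local-pvqkc66
spec:
  capacity:
    storage: 2Gi                    # illustrative size
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # illustrative; the suite uses a per-test class
  local:
    path: /tmp/local-volume-test-9f16ffb6-b049-4d7b-86e1-547584acaf26
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["latest-worker2"]
EOF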
SSSSSSSSSSSSSSSSSSSSSSSS
Mar 22 00:54:18.259: INFO: Running AfterSuite actions on all nodes
Mar 22 00:54:18.259: INFO: Running AfterSuite actions on node 1
Mar 22 00:54:18.259: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_storage/junit_01.xml
{"msg":"Test Suite completed","total":133,"completed":86,"skipped":5641,"failed":10,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}

Summarizing 10 Failures:

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, no capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232

[Fail] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] [AfterEach] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:114

[Fail] [sig-storage] PersistentVolumes-local [BeforeEach] Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160

[Fail] [sig-storage] CSI mock volume CSI workload information using mock driver [It] contain ephemeral=true when using inline volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:496

[Fail] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] [It] should modify fsGroup if fsGroupPolicy=default
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:496

[Fail] [sig-storage] PersistentVolumes-local [BeforeEach] [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160

[Fail] [sig-storage] PersistentVolumes-local [BeforeEach] [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:160

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, have capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, insufficient capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201
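When iterating on the ten failures summarized above, it is usually faster to re-run only the affected specs than the whole suite; a sketch, assuming the standard e2e.test binary and the provider/kubeconfig this run appears to use (the focus string is a regular expression matched against full spec names):

# Re-run just the CSIStorageCapacity specs from the failure summary.
./e2e.test -ginkgo.focus='CSI mock volume CSIStorageCapacity' \
  -provider=local -kubeconfig=/root/.kube/config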
Ran 96 of 5737 Specs in 4887.730 seconds
FAIL! -- 86 Passed | 10 Failed | 0 Pending | 5641 Skipped
--- FAIL: TestE2E (4887.83s)
FAIL
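To pull the failed spec names out of the JUnit report mentioned above without re-reading the full log, something like the following works, assuming xmllint (from libxml2) is available on the machine holding the results:

# Print the name attribute of every testcase that recorded a failure.
xmllint --xpath '//testcase[failure]/@name' \
  /home/opnfv/functest/results/sig_storage/junit_01.xml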