I0604 00:22:21.182026 24 e2e.go:129] Starting e2e run "8fb01ced-0b3d-45f4-abc3-12b0438ad48f" on Ginkgo node 1
{"msg":"Test Suite starting","total":21,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654302139 - Will randomize all specs
Will run 21 of 5773 specs
Jun 4 00:22:21.262: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 00:22:21.267: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 4 00:22:21.295: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 4 00:22:21.369: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 4 00:22:21.369: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 4 00:22:21.369: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 4 00:22:21.369: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 4 00:22:21.369: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 4 00:22:21.382: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 4 00:22:21.382: INFO: e2e test version: v1.21.9
Jun 4 00:22:21.383: INFO: kube-apiserver version: v1.21.1
Jun 4 00:22:21.384: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 00:22:21.390: INFO: Cluster IP family: ipv4
------------------------------
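The suite start above gates on cluster readiness before any spec runs: schedulable nodes, kube-system pods Running and Ready (Succeeded pods such as the cmk-init-discover jobs are exempt), and fully scheduled daemonsets. A minimal client-go sketch of that kind of readiness count follows; it is an illustration, not the e2e framework's implementation, and the kubeconfig path is an assumption:

// Hypothetical readiness probe, sketching the kind of check behind the
// "40 / 42 pods in namespace 'kube-system' are running and ready" line above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default per-user path (the run above uses /root/.kube/config).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready, counted := 0, 0
	for _, p := range pods.Items {
		// Succeeded pods are skipped, mirroring the "skipping waiting" lines in the log.
		if p.Status.Phase == corev1.PodSucceeded {
			continue
		}
		counted++
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, counted)
}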
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 4 00:22:21.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W0604 00:22:21.426887 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 4 00:22:21.427: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 4 00:22:21.430: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Jun 4 00:22:23.465: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8253 PodName:hostexec-node2-czdcr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 4 00:22:23.465: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 00:22:23.553: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Jun 4 00:22:23.553: INFO: exec node2: stdout: "0\n"
Jun 4 00:22:23.553: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Jun 4 00:22:23.553: INFO: exec node2: exit code: 0
Jun 4 00:22:23.553: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 4 00:22:23.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8253" for this suite.
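The gce-localssd-scsi-fs specs in this run are all skipped by the same precondition: the framework execs ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l on the node through a hostexec pod (nsenter into the host mount namespace) and skips when the count is under 1. A minimal local sketch of that check, run directly with os/exec instead of through a pod, with the directory path taken from the log:

// Illustration only; the suite drives this command through the hostexec pod's agnhost container.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func localSSDCount(dir string) (int, error) {
	// The pipeline's exit status is wc's, so a missing directory still exits 0
	// with "0" on stdout and the ls error on stderr -- matching the log above.
	out, err := exec.Command("sh", "-c", fmt.Sprintf("ls -1 %q/ | wc -l", dir)).Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	const dir = "/mnt/disks/by-uuid/google-local-ssds-scsi-fs"
	n, err := localSSDCount(dir)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if n < 1 {
		fmt.Println("Requires at least 1 scsi fs localSSD -- skipping")
		return
	}
	fmt.Printf("found %d local SSD filesystem(s) under %s\n", n, dir)
}

Because the exit status comes from wc, the framework parses stdout rather than relying on the command's status, which is why the log shows exit code 0 alongside the ls error.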
S [SKIPPING] in Spec Setup (BeforeEach) [2.164 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254
------------------------------
[sig-storage] Pod Disks [Serial] attach on previously attached volumes should work
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
[BeforeEach] [sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 4 00:22:23.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
[It] [Serial] attach on previously attached volumes should work
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
Jun 4 00:22:23.601: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 4 00:22:23.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-6262" for this suite.
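The Pod Disks spec is skipped for a different reason: it is gated on the cloud provider, and this run uses the local provider. A generic sketch of such a gate follows; the helper name and the provider value are stand-ins, not the e2e framework's own skip helpers:

package main

import "fmt"

// skipUnlessProviderIs reports whether the spec should be skipped because the
// configured provider is not in the supported list.
func skipUnlessProviderIs(provider string, supported ...string) bool {
	for _, s := range supported {
		if provider == s {
			return false
		}
	}
	fmt.Printf("Only supported for providers %v (not %s)\n", supported, provider)
	return true
}

func main() {
	// "local" mirrors the provider this run was executed against.
	if skipUnlessProviderIs("local", "gce", "gke", "aws") {
		return // the spec body never runs, as in the log above
	}
	// ... provider-specific disk attach/detach checks would go here
}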
S [SKIPPING] [0.047 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Serial] attach on previously attached volumes should work [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 4 00:22:23.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Jun 4 00:22:27.665: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6266 PodName:hostexec-node1-v9ngt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 4 00:22:27.665: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 00:22:27.774: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Jun 4 00:22:27.774: INFO: exec node1: stdout: "0\n"
Jun 4 00:22:27.774: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Jun 4 00:22:27.774: INFO: exec node1: exit code: 0
Jun 4 00:22:27.774: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 4 00:22:27.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6266" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [4.172 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 4 00:22:27.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Jun 4 00:22:31.833: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-364 PodName:hostexec-node1-h9q9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 4 00:22:31.833: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 00:22:31.925: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Jun 4 00:22:31.925: INFO: exec node1: stdout: "0\n"
Jun 4 00:22:31.925: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Jun 4 00:22:31.925: INFO: exec node1: exit code: 0
Jun 4 00:22:31.925: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 4 00:22:31.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-364" for this suite.
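The stress spec later in this run ("Stress with local volumes [Serial]") prepares ten tmpfs-backed mount points per node with mkdir -p <dir> && mount -t tmpfs -o size=10m tmpfs-<dir> <dir>, executed through nsenter in a hostexec pod, and tears each one down afterwards with umount and rm -r. A minimal local sketch of those two steps under stated assumptions (hypothetical path, root privileges, direct execution rather than a pod exec):

package main

import (
	"fmt"
	"os/exec"
)

// setupTmpfs mirrors: mkdir -p <dir> && mount -t tmpfs -o size=10m tmpfs-<dir> <dir>
func setupTmpfs(dir string) error {
	cmd := fmt.Sprintf("mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%s %q", dir, dir, dir)
	return exec.Command("sh", "-c", cmd).Run()
}

// teardownTmpfs mirrors the AfterEach steps in the log: umount, then remove the directory.
func teardownTmpfs(dir string) error {
	if err := exec.Command("sh", "-c", fmt.Sprintf("umount %q", dir)).Run(); err != nil {
		return err
	}
	return exec.Command("sh", "-c", fmt.Sprintf("rm -r %q", dir)).Run()
}

func main() {
	dir := "/tmp/local-volume-test-example" // hypothetical; the suite appends a random UUID instead
	if err := setupTmpfs(dir); err != nil {
		fmt.Println("setup failed (root required):", err)
		return
	}
	fmt.Println("tmpfs mounted at", dir)
	if err := teardownTmpfs(dir); err != nil {
		fmt.Println("teardown failed:", err)
	}
}

In the suite itself both steps run inside the hostexec pod's agnhost container so that they act on the node's own mount namespace.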
S [SKIPPING] in Spec Setup (BeforeEach) [4.150 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 4 00:22:31.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Jun 4 00:22:31.970: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 4 00:22:31.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-4903" for this suite.
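The same stress spec also runs a recycler goroutine: whenever a pod completes and its PVCs are removed, the bound PV is deleted and a new PV is created for the same local volume (the repeated 'Delete "local-pvwdmwg" and create a new PV for same local volume storage' steps later in the log). A hedged client-go sketch of that delete-and-recreate step; the PV name and kubeconfig path are hypothetical, and the real spec builds the replacement PV from its own local-volume config rather than copying the old object:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// recyclePV deletes a released PersistentVolume and recreates it with the same
// spec so the same local path can be claimed again.
func recyclePV(ctx context.Context, cs kubernetes.Interface, name string) error {
	pv, err := cs.CoreV1().PersistentVolumes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if err := cs.CoreV1().PersistentVolumes().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Strip server-assigned fields and the old claim binding before recreating.
	pv.ResourceVersion = ""
	pv.UID = ""
	pv.Spec.ClaimRef = nil
	pv.Status = corev1.PersistentVolumeStatus{}
	_, err = cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{})
	return err
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumption
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := recyclePV(context.TODO(), cs, "local-pv-example"); err != nil { // hypothetical PV name
		fmt.Println("recycle failed:", err)
	}
}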
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:22:31.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c" Jun 4 00:22:34.036: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c" "/tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e" Jun 4 00:22:34.127: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e" 
"/tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643" Jun 4 00:22:34.214: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643" "/tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4" Jun 4 00:22:34.308: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4" "/tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e" Jun 4 00:22:34.397: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e" "/tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a" Jun 4 00:22:34.485: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a" "/tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add" Jun 4 00:22:34.575: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add" 
"/tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a" Jun 4 00:22:34.668: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a" "/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07" Jun 4 00:22:34.784: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07" "/tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7" Jun 4 00:22:34.876: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7" "/tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:34.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311" Jun 4 00:22:36.996: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311" "/tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:36.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b" Jun 4 00:22:37.096: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b" 
"/tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415" Jun 4 00:22:37.183: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415" "/tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95" Jun 4 00:22:37.269: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95" "/tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46" Jun 4 00:22:37.354: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46" "/tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae" Jun 4 00:22:37.439: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae" "/tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c" Jun 4 00:22:37.527: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c" 
"/tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8" Jun 4 00:22:37.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8" "/tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c" Jun 4 00:22:37.713: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c" "/tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345" Jun 4 00:22:37.802: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345" "/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:22:37.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully Jun 4 00:22:43.104: INFO: Deleting pod pod-d66e9e83-2a62-4591-896d-5676d8358b58 Jun 4 00:22:43.110: INFO: Deleting PersistentVolumeClaim "pvc-42zwz" Jun 4 00:22:43.114: INFO: Deleting PersistentVolumeClaim "pvc-bss44" Jun 4 00:22:43.118: INFO: Deleting PersistentVolumeClaim "pvc-k925z" Jun 4 00:22:43.121: INFO: 1/28 pods finished STEP: Delete "local-pvwdmwg" and create a new PV for same local volume storage STEP: Delete "local-pvjzmm4" and create a new PV for same local volume storage STEP: Delete "local-pv2sszh" and create a new PV for same local volume storage Jun 4 00:22:45.104: INFO: Deleting pod pod-9ddcb672-54c8-4429-8b86-0bb2bee382ed Jun 4 00:22:45.112: INFO: Deleting PersistentVolumeClaim "pvc-kmkrs" Jun 4 00:22:45.116: INFO: Deleting PersistentVolumeClaim "pvc-kzwkg" Jun 4 
00:22:45.119: INFO: Deleting PersistentVolumeClaim "pvc-zg2fl" Jun 4 00:22:45.122: INFO: 2/28 pods finished STEP: Delete "local-pvgb296" and create a new PV for same local volume storage STEP: Delete "local-pvkh7dv" and create a new PV for same local volume storage STEP: Delete "local-pvp6zhc" and create a new PV for same local volume storage Jun 4 00:22:46.104: INFO: Deleting pod pod-67151e98-09e9-42ab-a81e-c9a4cb911613 Jun 4 00:22:46.110: INFO: Deleting PersistentVolumeClaim "pvc-rblhn" Jun 4 00:22:46.115: INFO: Deleting PersistentVolumeClaim "pvc-t78mw" Jun 4 00:22:46.119: INFO: Deleting PersistentVolumeClaim "pvc-2k2g2" Jun 4 00:22:46.122: INFO: 3/28 pods finished Jun 4 00:22:46.122: INFO: Deleting pod pod-a2f43df1-7b0f-4f5a-92d6-10fe4929dbc5 Jun 4 00:22:46.130: INFO: Deleting PersistentVolumeClaim "pvc-9xgtc" STEP: Delete "local-pv42lhb" and create a new PV for same local volume storage Jun 4 00:22:46.133: INFO: Deleting PersistentVolumeClaim "pvc-9jnrx" Jun 4 00:22:46.137: INFO: Deleting PersistentVolumeClaim "pvc-jct5p" Jun 4 00:22:46.141: INFO: 4/28 pods finished Jun 4 00:22:46.141: INFO: Deleting pod pod-f0748fb6-88e3-48b2-9ac5-b9a2a8d8d21b STEP: Delete "local-pvrqnrd" and create a new PV for same local volume storage Jun 4 00:22:46.147: INFO: Deleting PersistentVolumeClaim "pvc-8q9h2" Jun 4 00:22:46.151: INFO: Deleting PersistentVolumeClaim "pvc-cpgsp" STEP: Delete "local-pvc44c5" and create a new PV for same local volume storage Jun 4 00:22:46.154: INFO: Deleting PersistentVolumeClaim "pvc-ptr6c" Jun 4 00:22:46.158: INFO: 5/28 pods finished STEP: Delete "local-pv4mb2r" and create a new PV for same local volume storage STEP: Delete "local-pvvvntn" and create a new PV for same local volume storage STEP: Delete "local-pvsmf94" and create a new PV for same local volume storage STEP: Delete "local-pv779ch" and create a new PV for same local volume storage STEP: Delete "local-pvxjwbg" and create a new PV for same local volume storage STEP: Delete "local-pvqb6mv" and create a new PV for same local volume storage Jun 4 00:22:49.105: INFO: Deleting pod pod-fe2a7c27-850b-4cab-ace4-b4287a931378 Jun 4 00:22:49.112: INFO: Deleting PersistentVolumeClaim "pvc-j7v4r" Jun 4 00:22:49.116: INFO: Deleting PersistentVolumeClaim "pvc-jdzk2" Jun 4 00:22:49.120: INFO: Deleting PersistentVolumeClaim "pvc-lkbl9" Jun 4 00:22:49.124: INFO: 6/28 pods finished STEP: Delete "local-pvdr8r9" and create a new PV for same local volume storage STEP: Delete "local-pvx52ct" and create a new PV for same local volume storage STEP: Delete "local-pv7w6p2" and create a new PV for same local volume storage Jun 4 00:22:53.105: INFO: Deleting pod pod-47ac0852-ac50-444d-bbf3-88cdda40819e Jun 4 00:22:53.114: INFO: Deleting PersistentVolumeClaim "pvc-tgkp6" Jun 4 00:22:53.118: INFO: Deleting PersistentVolumeClaim "pvc-9gdv9" Jun 4 00:22:53.123: INFO: Deleting PersistentVolumeClaim "pvc-zcmth" Jun 4 00:22:53.126: INFO: 7/28 pods finished STEP: Delete "local-pvk5n6w" and create a new PV for same local volume storage STEP: Delete "local-pvrnmvm" and create a new PV for same local volume storage STEP: Delete "local-pvrpsfk" and create a new PV for same local volume storage Jun 4 00:22:54.104: INFO: Deleting pod pod-7bf9ac25-4999-48de-9011-231bcce90161 Jun 4 00:22:54.111: INFO: Deleting PersistentVolumeClaim "pvc-bdjpb" Jun 4 00:22:54.115: INFO: Deleting PersistentVolumeClaim "pvc-lhmhl" Jun 4 00:22:54.120: INFO: Deleting PersistentVolumeClaim "pvc-h55k9" Jun 4 00:22:54.124: INFO: 8/28 pods finished STEP: Delete "local-pvjs29c" and 
create a new PV for same local volume storage STEP: Delete "local-pvlk9w4" and create a new PV for same local volume storage STEP: Delete "local-pvqt6gb" and create a new PV for same local volume storage Jun 4 00:22:56.104: INFO: Deleting pod pod-fbde9980-74ab-4a90-9dac-6037bc2aa4dc Jun 4 00:22:56.111: INFO: Deleting PersistentVolumeClaim "pvc-v6rtf" Jun 4 00:22:56.115: INFO: Deleting PersistentVolumeClaim "pvc-nlsbl" Jun 4 00:22:56.121: INFO: Deleting PersistentVolumeClaim "pvc-tmxzm" Jun 4 00:22:56.125: INFO: 9/28 pods finished STEP: Delete "local-pvqvm56" and create a new PV for same local volume storage STEP: Delete "local-pv68fhf" and create a new PV for same local volume storage STEP: Delete "local-pvpqx77" and create a new PV for same local volume storage Jun 4 00:23:01.107: INFO: Deleting pod pod-a171fb68-011e-4347-8d79-74652eac322f Jun 4 00:23:01.114: INFO: Deleting PersistentVolumeClaim "pvc-tq7zn" Jun 4 00:23:01.118: INFO: Deleting PersistentVolumeClaim "pvc-w5rzp" Jun 4 00:23:01.121: INFO: Deleting PersistentVolumeClaim "pvc-qfj9k" Jun 4 00:23:01.126: INFO: 10/28 pods finished STEP: Delete "local-pvshghn" and create a new PV for same local volume storage STEP: Delete "local-pvm6vgz" and create a new PV for same local volume storage STEP: Delete "local-pvbtfdc" and create a new PV for same local volume storage Jun 4 00:23:02.102: INFO: Deleting pod pod-99d9601c-00ce-48dc-be61-82a37ed88339 Jun 4 00:23:02.112: INFO: Deleting PersistentVolumeClaim "pvc-l9vk9" Jun 4 00:23:02.117: INFO: Deleting PersistentVolumeClaim "pvc-nf88t" Jun 4 00:23:02.120: INFO: Deleting PersistentVolumeClaim "pvc-np9n2" Jun 4 00:23:02.124: INFO: 11/28 pods finished STEP: Delete "local-pv7sd86" and create a new PV for same local volume storage STEP: Delete "local-pv6tv75" and create a new PV for same local volume storage STEP: Delete "local-pvf4fgl" and create a new PV for same local volume storage Jun 4 00:23:04.104: INFO: Deleting pod pod-fc40eb67-967a-43c6-abf1-b3962df629eb Jun 4 00:23:04.112: INFO: Deleting PersistentVolumeClaim "pvc-kjlzl" Jun 4 00:23:04.117: INFO: Deleting PersistentVolumeClaim "pvc-qbfrh" Jun 4 00:23:04.122: INFO: Deleting PersistentVolumeClaim "pvc-f8qfn" Jun 4 00:23:04.127: INFO: 12/28 pods finished STEP: Delete "local-pvdsplb" and create a new PV for same local volume storage STEP: Delete "local-pvmgbq9" and create a new PV for same local volume storage STEP: Delete "local-pvpr84j" and create a new PV for same local volume storage Jun 4 00:23:06.104: INFO: Deleting pod pod-fde47544-237a-4d50-a011-8d3a83caaae4 Jun 4 00:23:06.111: INFO: Deleting PersistentVolumeClaim "pvc-bd8xm" Jun 4 00:23:06.115: INFO: Deleting PersistentVolumeClaim "pvc-q4m26" Jun 4 00:23:06.119: INFO: Deleting PersistentVolumeClaim "pvc-c77d6" Jun 4 00:23:06.122: INFO: 13/28 pods finished STEP: Delete "local-pvtvllj" and create a new PV for same local volume storage STEP: Delete "local-pvxpw79" and create a new PV for same local volume storage STEP: Delete "local-pvmfzqx" and create a new PV for same local volume storage Jun 4 00:23:07.104: INFO: Deleting pod pod-d5e60d97-6402-4049-9891-ebbd476ed149 Jun 4 00:23:07.111: INFO: Deleting PersistentVolumeClaim "pvc-frxtm" Jun 4 00:23:07.115: INFO: Deleting PersistentVolumeClaim "pvc-w59tz" Jun 4 00:23:07.118: INFO: Deleting PersistentVolumeClaim "pvc-z5t95" Jun 4 00:23:07.124: INFO: 14/28 pods finished Jun 4 00:23:07.124: INFO: Deleting pod pod-ff9bdefd-a9c7-4a06-bb48-0ce5107e72be Jun 4 00:23:07.130: INFO: Deleting PersistentVolumeClaim "pvc-zcj89" STEP: Delete 
"local-pv5zcgw" and create a new PV for same local volume storage Jun 4 00:23:07.134: INFO: Deleting PersistentVolumeClaim "pvc-654xz" Jun 4 00:23:07.138: INFO: Deleting PersistentVolumeClaim "pvc-sr56t" Jun 4 00:23:07.142: INFO: 15/28 pods finished STEP: Delete "local-pvk4nv5" and create a new PV for same local volume storage STEP: Delete "local-pvhcf5v" and create a new PV for same local volume storage STEP: Delete "local-pvhwsrq" and create a new PV for same local volume storage STEP: Delete "local-pvw55mc" and create a new PV for same local volume storage STEP: Delete "local-pv64grw" and create a new PV for same local volume storage Jun 4 00:23:10.105: INFO: Deleting pod pod-7eebbec5-97f1-46b2-9572-038191fc5da4 Jun 4 00:23:10.112: INFO: Deleting PersistentVolumeClaim "pvc-tzhh7" Jun 4 00:23:10.115: INFO: Deleting PersistentVolumeClaim "pvc-fpwfk" Jun 4 00:23:10.119: INFO: Deleting PersistentVolumeClaim "pvc-dq882" Jun 4 00:23:10.122: INFO: 16/28 pods finished STEP: Delete "local-pv6xsvb" and create a new PV for same local volume storage STEP: Delete "local-pv7x5gl" and create a new PV for same local volume storage STEP: Delete "local-pv896k6" and create a new PV for same local volume storage Jun 4 00:23:12.106: INFO: Deleting pod pod-ca52ff9a-fd7f-48a5-8129-267a81505d24 Jun 4 00:23:12.115: INFO: Deleting PersistentVolumeClaim "pvc-mr2v4" Jun 4 00:23:12.120: INFO: Deleting PersistentVolumeClaim "pvc-v6682" Jun 4 00:23:12.123: INFO: Deleting PersistentVolumeClaim "pvc-ds9fs" Jun 4 00:23:12.127: INFO: 17/28 pods finished STEP: Delete "local-pv8cdpc" and create a new PV for same local volume storage STEP: Delete "local-pvgkdw7" and create a new PV for same local volume storage STEP: Delete "local-pv9d48l" and create a new PV for same local volume storage Jun 4 00:23:13.106: INFO: Deleting pod pod-b973e900-5d9e-4178-8a4e-a8fd209485a2 Jun 4 00:23:13.114: INFO: Deleting PersistentVolumeClaim "pvc-flr55" Jun 4 00:23:13.118: INFO: Deleting PersistentVolumeClaim "pvc-cvp9t" Jun 4 00:23:13.122: INFO: Deleting PersistentVolumeClaim "pvc-wrgww" Jun 4 00:23:13.125: INFO: 18/28 pods finished STEP: Delete "local-pvjbn7t" and create a new PV for same local volume storage STEP: Delete "local-pvvw8v5" and create a new PV for same local volume storage STEP: Delete "local-pvdh84j" and create a new PV for same local volume storage Jun 4 00:23:14.105: INFO: Deleting pod pod-1e08c65f-1dbb-419b-85ca-dda4ad9a6086 Jun 4 00:23:14.112: INFO: Deleting PersistentVolumeClaim "pvc-z6r5h" Jun 4 00:23:14.116: INFO: Deleting PersistentVolumeClaim "pvc-x96hp" Jun 4 00:23:14.120: INFO: Deleting PersistentVolumeClaim "pvc-nv9zn" Jun 4 00:23:14.124: INFO: 19/28 pods finished STEP: Delete "local-pvsvrrv" and create a new PV for same local volume storage STEP: Delete "local-pvn6v5d" and create a new PV for same local volume storage STEP: Delete "local-pvp7htp" and create a new PV for same local volume storage Jun 4 00:23:18.106: INFO: Deleting pod pod-58bb7c71-8ba6-4628-9510-f26d47a3eca0 Jun 4 00:23:18.113: INFO: Deleting PersistentVolumeClaim "pvc-8p2sg" Jun 4 00:23:18.116: INFO: Deleting PersistentVolumeClaim "pvc-5gpfb" Jun 4 00:23:18.120: INFO: Deleting PersistentVolumeClaim "pvc-gf9wb" Jun 4 00:23:18.124: INFO: 20/28 pods finished STEP: Delete "local-pvjhcxw" and create a new PV for same local volume storage STEP: Delete "local-pvgx6pt" and create a new PV for same local volume storage STEP: Delete "local-pvh6mf7" and create a new PV for same local volume storage Jun 4 00:23:20.104: INFO: Deleting pod 
pod-1bd9821d-8d53-48c7-a0bd-bb6bf9d8fda1 Jun 4 00:23:20.114: INFO: Deleting PersistentVolumeClaim "pvc-9kwsr" Jun 4 00:23:20.118: INFO: Deleting PersistentVolumeClaim "pvc-55l2k" Jun 4 00:23:20.122: INFO: Deleting PersistentVolumeClaim "pvc-zct9x" Jun 4 00:23:20.125: INFO: 21/28 pods finished STEP: Delete "local-pvbjp67" and create a new PV for same local volume storage STEP: Delete "local-pvbdm5h" and create a new PV for same local volume storage STEP: Delete "local-pvbffps" and create a new PV for same local volume storage Jun 4 00:23:22.107: INFO: Deleting pod pod-432ebaae-c053-4a77-8639-60cb8e33dde7 Jun 4 00:23:22.114: INFO: Deleting PersistentVolumeClaim "pvc-lst8n" Jun 4 00:23:22.118: INFO: Deleting PersistentVolumeClaim "pvc-68mcj" Jun 4 00:23:22.122: INFO: Deleting PersistentVolumeClaim "pvc-nrzgm" Jun 4 00:23:22.125: INFO: 22/28 pods finished STEP: Delete "local-pvsgdrw" and create a new PV for same local volume storage STEP: Delete "local-pvvqk9h" and create a new PV for same local volume storage STEP: Delete "local-pvk4fq5" and create a new PV for same local volume storage Jun 4 00:23:23.110: INFO: Deleting pod pod-290fad11-00bb-476a-bbd3-890b6f3d4ebc Jun 4 00:23:23.116: INFO: Deleting PersistentVolumeClaim "pvc-4dzm2" Jun 4 00:23:23.120: INFO: Deleting PersistentVolumeClaim "pvc-zgb5j" Jun 4 00:23:23.124: INFO: Deleting PersistentVolumeClaim "pvc-6j9v4" Jun 4 00:23:23.128: INFO: 23/28 pods finished STEP: Delete "local-pvhf2dv" and create a new PV for same local volume storage STEP: Delete "local-pvb6dzt" and create a new PV for same local volume storage STEP: Delete "local-pv5fmd6" and create a new PV for same local volume storage Jun 4 00:23:25.105: INFO: Deleting pod pod-7d6781ee-55d1-48aa-b108-45b12b289be8 Jun 4 00:23:25.113: INFO: Deleting PersistentVolumeClaim "pvc-64g6s" Jun 4 00:23:25.116: INFO: Deleting PersistentVolumeClaim "pvc-zk48b" Jun 4 00:23:25.120: INFO: Deleting PersistentVolumeClaim "pvc-crp4b" Jun 4 00:23:25.124: INFO: 24/28 pods finished Jun 4 00:23:25.124: INFO: Deleting pod pod-ab54c256-0242-4248-9556-391a8dd76663 Jun 4 00:23:25.129: INFO: Deleting PersistentVolumeClaim "pvc-t69g7" STEP: Delete "local-pv8bqf9" and create a new PV for same local volume storage Jun 4 00:23:25.134: INFO: Deleting PersistentVolumeClaim "pvc-655mh" Jun 4 00:23:25.139: INFO: Deleting PersistentVolumeClaim "pvc-9zdls" STEP: Delete "local-pvxw898" and create a new PV for same local volume storage Jun 4 00:23:25.143: INFO: 25/28 pods finished STEP: Delete "local-pvr8zrp" and create a new PV for same local volume storage STEP: Delete "local-pvsw6jn" and create a new PV for same local volume storage STEP: Delete "local-pvzc5tv" and create a new PV for same local volume storage STEP: Delete "local-pv4djpp" and create a new PV for same local volume storage Jun 4 00:23:29.104: INFO: Deleting pod pod-62063da9-075f-4e48-80c3-9da52307204c Jun 4 00:23:29.111: INFO: Deleting PersistentVolumeClaim "pvc-xdkch" Jun 4 00:23:29.115: INFO: Deleting PersistentVolumeClaim "pvc-zsknk" Jun 4 00:23:29.119: INFO: Deleting PersistentVolumeClaim "pvc-9pq9x" Jun 4 00:23:29.123: INFO: 26/28 pods finished STEP: Delete "local-pv2gk8b" and create a new PV for same local volume storage STEP: Delete "local-pvd8pt2" and create a new PV for same local volume storage STEP: Delete "local-pvfcwq4" and create a new PV for same local volume storage Jun 4 00:23:30.103: INFO: Deleting pod pod-6a3d510b-01eb-4b41-9987-93c87b6be144 Jun 4 00:23:30.112: INFO: Deleting PersistentVolumeClaim "pvc-ksndf" Jun 4 00:23:30.116: 
INFO: Deleting PersistentVolumeClaim "pvc-jbn2d" Jun 4 00:23:30.120: INFO: Deleting PersistentVolumeClaim "pvc-gzn9g" Jun 4 00:23:30.123: INFO: 27/28 pods finished STEP: Delete "local-pvr6lfw" and create a new PV for same local volume storage STEP: Delete "local-pvc8shd" and create a new PV for same local volume storage STEP: Delete "local-pvqmkdp" and create a new PV for same local volume storage Jun 4 00:23:31.104: INFO: Deleting pod pod-63722b69-ea1f-4988-b8be-5feb2e0ac8d0 Jun 4 00:23:31.111: INFO: Deleting PersistentVolumeClaim "pvc-8blmp" Jun 4 00:23:31.115: INFO: Deleting PersistentVolumeClaim "pvc-7mxvf" Jun 4 00:23:31.119: INFO: Deleting PersistentVolumeClaim "pvc-gs49g" Jun 4 00:23:31.122: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Jun 4 00:23:31.122: INFO: pvc is nil Jun 4 00:23:31.122: INFO: Deleting PersistentVolume "local-pvwjvjz" STEP: Cleaning up PVC and PV Jun 4 00:23:31.126: INFO: pvc is nil Jun 4 00:23:31.126: INFO: Deleting PersistentVolume "local-pvtct5b" STEP: Cleaning up PVC and PV Jun 4 00:23:31.129: INFO: pvc is nil Jun 4 00:23:31.129: INFO: Deleting PersistentVolume "local-pv6k7r8" STEP: Cleaning up PVC and PV Jun 4 00:23:31.133: INFO: pvc is nil Jun 4 00:23:31.133: INFO: Deleting PersistentVolume "local-pvqv9vd" STEP: Cleaning up PVC and PV Jun 4 00:23:31.137: INFO: pvc is nil Jun 4 00:23:31.137: INFO: Deleting PersistentVolume "local-pv9bw24" STEP: Cleaning up PVC and PV Jun 4 00:23:31.140: INFO: pvc is nil Jun 4 00:23:31.140: INFO: Deleting PersistentVolume "local-pv5ncsp" STEP: Cleaning up PVC and PV Jun 4 00:23:31.145: INFO: pvc is nil Jun 4 00:23:31.145: INFO: Deleting PersistentVolume "local-pvzklrc" STEP: Cleaning up PVC and PV Jun 4 00:23:31.148: INFO: pvc is nil Jun 4 00:23:31.148: INFO: Deleting PersistentVolume "local-pv4xr7r" STEP: Cleaning up PVC and PV Jun 4 00:23:31.152: INFO: pvc is nil Jun 4 00:23:31.152: INFO: Deleting PersistentVolume "local-pvdnqv6" STEP: Cleaning up PVC and PV Jun 4 00:23:31.155: INFO: pvc is nil Jun 4 00:23:31.155: INFO: Deleting PersistentVolume "local-pvk4csf" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311" Jun 4 00:23:31.159: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:31.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8bedb7c2-ecdb-48a0-b92f-efbb7330f311] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b" Jun 4 00:23:31.432: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:31.525: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8c06cfc1-104a-4f66-830f-09a69b81018b] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415" Jun 4 00:23:31.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:31.710: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-00322413-543b-4fbf-ab9b-a6cee35e2415] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95" Jun 4 00:23:31.797: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:31.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:32.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad2c2fc5-b9eb-406e-88c0-3b504a640f95] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46" Jun 4 00:23:32.124: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:32.220: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f280dd28-3d7b-4a87-a91b-9c01744ead46] 
Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae" Jun 4 00:23:32.300: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:32.411: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c28999d2-60d1-4de2-b9e1-7082a7a686ae] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c" Jun 4 00:23:32.501: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:32.589: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c6288036-5ba7-4682-9ea7-ca4ace66667c] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8" Jun 4 00:23:32.673: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:32.769: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a1f32036-5ac9-4183-8ea1-b676e2a84dd8] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c" Jun 4 00:23:32.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c"] Namespace:persistent-local-volumes-test-8873 
PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:32.947: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e4327f04-e248-4bf5-bf3b-1e69b62d535c] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:32.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345" Jun 4 00:23:33.028: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:33.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node2-8qplb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Jun 4 00:23:33.217: INFO: pvc is nil Jun 4 00:23:33.217: INFO: Deleting PersistentVolume "local-pv8m5c6" STEP: Cleaning up PVC and PV Jun 4 00:23:33.222: INFO: pvc is nil Jun 4 00:23:33.222: INFO: Deleting PersistentVolume "local-pv6gd7k" STEP: Cleaning up PVC and PV Jun 4 00:23:33.226: INFO: pvc is nil Jun 4 00:23:33.226: INFO: Deleting PersistentVolume "local-pvrcltz" STEP: Cleaning up PVC and PV Jun 4 00:23:33.232: INFO: pvc is nil Jun 4 00:23:33.232: INFO: Deleting PersistentVolume "local-pvdb2mt" STEP: Cleaning up PVC and PV Jun 4 00:23:33.236: INFO: pvc is nil Jun 4 00:23:33.236: INFO: Deleting PersistentVolume "local-pvnwjgb" STEP: Cleaning up PVC and PV Jun 4 00:23:33.240: INFO: pvc is nil Jun 4 00:23:33.240: INFO: Deleting PersistentVolume "local-pv4svct" STEP: Cleaning up PVC and PV Jun 4 00:23:33.244: INFO: pvc is nil Jun 4 00:23:33.244: INFO: Deleting PersistentVolume "local-pvc82qc" STEP: Cleaning up PVC and PV Jun 4 00:23:33.247: INFO: pvc is nil Jun 4 00:23:33.247: INFO: Deleting PersistentVolume "local-pvzxg4r" STEP: Cleaning up PVC and PV Jun 4 00:23:33.251: INFO: pvc is nil Jun 4 00:23:33.251: INFO: Deleting PersistentVolume "local-pvdk4cb" STEP: Cleaning up PVC and PV Jun 4 00:23:33.255: INFO: pvc is nil Jun 4 00:23:33.255: INFO: Deleting PersistentVolume "local-pvflds2" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c" Jun 4 00:23:33.259: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.259: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Removing the test directory Jun 4 00:23:33.362: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d86540e8-330f-4c55-8863-c45b1a39ec0c] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e" Jun 4 00:23:33.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:33.573: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f1cc165b-97f0-455d-aa85-696dd616df8e] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643" Jun 4 00:23:33.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:33.750: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d62907ae-d0bc-4776-802f-238582316643] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4" Jun 4 00:23:33.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:33.929: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c7797ca3-b6be-47cb-9b99-d377ce493fd4] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:33.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e" Jun 4 
00:23:34.024: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:34.133: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-306d94ee-9fd0-4e53-b803-8ca624bf0c5e] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a" Jun 4 00:23:34.214: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:34.307: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-eaef13ce-e97b-4ec0-a6b3-09bd38563e1a] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add" Jun 4 00:23:34.387: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:34.481: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c8dcd478-dc90-4fe6-aba7-ae3ff7460add] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a" Jun 4 00:23:34.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:34.688: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-0175a6bf-98b0-4dae-940f-1ec49c173c6a] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07" Jun 4 00:23:34.775: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:34.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-791d454b-cb2f-4ad1-9e59-a754898cbd07] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7" Jun 4 00:23:34.948: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7"] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:34.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 4 00:23:35.054: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-98c094d6-91a6-481a-b35e-8b0b0e208cb7] Namespace:persistent-local-volumes-test-8873 PodName:hostexec-node1-6nwfk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:35.054: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:35.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8873" for this suite. 
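The teardown above drives every host-side step through the hostexec (agnhost) pod on each node: the test enters the host's mount namespace via nsenter --mount=/rootfs/proc/1/ns/mnt, unmounts the tmpfs at each /tmp/local-volume-test-<uuid> path, and then removes the directory. A minimal manual equivalent for one of the paths in the log, assuming kubectl access to the same cluster and that the hostexec pod is still running, would look like this:

# Sketch only: re-running one cleanup step from the log by hand.
# Namespace, pod, container and path are copied from the entries above.
NS=persistent-local-volumes-test-8873
POD=hostexec-node2-8qplb
DIR=/tmp/local-volume-test-b6a24c56-830a-4893-bf1e-fe1dff335345
kubectl -n "$NS" exec "$POD" -c agnhost-container -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "umount $DIR"
kubectl -n "$NS" exec "$POD" -c agnhost-container -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "rm -r $DIR"

The unmount-then-remove ordering matters: removing the directory while the tmpfs is still mounted would descend into the mounted filesystem instead of deleting the mount point itself.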
• [SLOW TEST:63.171 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":21,"completed":1,"skipped":1835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:35.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:35.187: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:35.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-912" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:35.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:35.229: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:35.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3915" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:35.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:35.263: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:35.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6684" for this suite. 
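Every [Serial] Volume metrics spec in this run, including the ones above and the rest below, is gated in its BeforeEach on the cloud provider and is skipped here because the suite runs with the local provider. Exercising them requires a cluster on one of the supported providers; a hedged sketch of such an invocation, assuming the standard upstream e2e.test flags and a suitable kubeconfig:

# Sketch only: re-running the Volume metrics focus on a supported provider.
# Binary path, provider value and kubeconfig location are assumptions.
./e2e.test \
  -ginkgo.focus='\[sig-storage\] \[Serial\] Volume metrics' \
  --provider=gce \
  --kubeconfig="$HOME/.kube/config"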
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:35.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 4 00:23:37.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4566 PodName:hostexec-node2-2gd4f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:37.330: INFO: >>> kubeConfig: /root/.kube/config Jun 4 00:23:37.409: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 4 00:23:37.409: INFO: exec node2: stdout: "0\n" Jun 4 00:23:37.409: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 4 00:23:37.409: INFO: exec node2: exit code: 0 Jun 4 00:23:37.409: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:37.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4566" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.139 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning errors [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:37.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:37.444: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:37.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8057" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:37.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 4 00:23:39.512: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9497 PodName:hostexec-node1-svgh7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:39.512: INFO: >>> kubeConfig: /root/.kube/config Jun 4 00:23:39.596: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 4 00:23:39.596: INFO: exec node1: stdout: "0\n" Jun 4 00:23:39.596: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 4 00:23:39.596: INFO: exec node1: exit code: 0 Jun 4 00:23:39.596: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:39.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9497" for this suite. 
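Each [Volume type: gce-localssd-scsi-fs] spec, like the one above, probes the target node for a GCE local SSD that has been formatted and mounted under /mnt/disks/by-uuid/google-local-ssds-scsi-fs/. On these bare-metal nodes the directory does not exist, the pipeline counts 0 entries, and the spec skips with "Requires at least 1 scsi fs localSSD". Run directly on a node (shell access assumed), the probe reduces to:

# Sketch only: the local-SSD probe from the log, executed directly on a node.
count=$(ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ 2>/dev/null | wc -l)
if [ "$count" -ge 1 ]; then
  echo "$count scsi fs localSSD mount(s) found; the spec would run"
else
  echo "no scsi fs localSSD; the spec is skipped"
fi

The pipeline's exit status is that of wc -l, so the command in the log exits 0 even though ls prints an error to stderr; the skip decision is driven by the "0" on stdout, not by the exit code.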
S [SKIPPING] in Spec Setup (BeforeEach) [2.155 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:39.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:39.638: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:39.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5198" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:39.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:39.670: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:39.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-859" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:39.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:39.705: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:39.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7517" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:39.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:23:39.741: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:39.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5013" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:39.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 4 00:23:43.805: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3507 PodName:hostexec-node1-4t8xw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:23:43.805: INFO: >>> kubeConfig: /root/.kube/config Jun 4 00:23:43.891: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 4 00:23:43.891: INFO: exec node1: stdout: "0\n" Jun 4 00:23:43.891: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 4 00:23:43.891: INFO: exec node1: exit code: 0 Jun 4 00:23:43.891: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:23:43.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3507" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.146 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:23:43.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pvp7rk9 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:25:14.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5014" for this suite. 
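The spec above ("Pods sharing a single local PV") binds one local PersistentVolume to a single PVC and then creates 50 pods that all reference that same claim, asserting that every pod reaches Running; this works because the local PV's node affinity places all of the pods on the node that owns the volume. A scaled-down, hand-rolled sketch of the same shape follows; the claim name, image, label and pod count are illustrative and not taken from the test, which creates its pods through the e2e framework rather than kubectl:

# Sketch only: several pods sharing one PVC (names and count are illustrative).
for i in $(seq 1 5); do
  kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shared-pv-pod-$i
  labels:
    app: shared-pv-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-local-pvc
EOF
done
# Wait until all of them report Ready, mirroring the test's "all pods running" check.
kubectl wait --for=condition=Ready pod -l app=shared-pv-demo --timeout=120s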
• [SLOW TEST:90.551 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":21,"completed":2,"skipped":5179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:25:14.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:25:14.491: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:25:14.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1593" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:25:14.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 4 00:25:14.540: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:25:14.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5093" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 4 00:25:14.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 4 00:25:18.604: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8872 PodName:hostexec-node2-xgf84 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 4 00:25:18.604: INFO: >>> kubeConfig: /root/.kube/config Jun 4 00:25:18.696: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 4 00:25:18.696: INFO: exec node2: stdout: "0\n" Jun 4 00:25:18.696: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 4 00:25:18.696: INFO: exec node2: exit code: 0 Jun 4 00:25:18.696: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 4 00:25:18.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8872" for this suite. 
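The spec above is skipped for the same missing-local-SSD reason; what it would otherwise verify is that, after the first pod is deleted, a second pod can mount the same local volume with a different fsGroup. The mechanism under test is the pod-level securityContext.fsGroup field, which, for volume types that support it, makes the kubelet set the volume's group ownership to that GID before the containers start. A minimal illustration with hypothetical names, unrelated to the test's own fixtures:

# Sketch only: a pod requesting fsGroup 2000 on a mounted claim (names are illustrative).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id && ls -ld /mnt/data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-pvc-demo
EOF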
S [SKIPPING] in Spec Setup (BeforeEach) [4.156 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 4 00:25:18.708: INFO: Running AfterSuite actions on all nodes
Jun 4 00:25:18.708: INFO: Running AfterSuite actions on node 1
Jun 4 00:25:18.708: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":21,"completed":2,"skipped":5771,"failed":0}
Ran 2 of 5773 Specs in 177.450 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5771 Skipped
PASS
Ginkgo ran 1 suite in 2m58.865542627s
Test Suite Passed
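For post-processing, the per-spec outcomes are easiest to pull from the embedded JSON "msg" records (two PASSED entries and one "Test Suite completed" record in this run) or from the JUnit file named above. A hedged example, assuming the console output was saved to a file called e2e.log:

# Sketch only: extract the result records from a saved copy of this log
# and locate the JUnit report it references. The log file name is assumed.
grep -o '{"msg":[^}]*}' e2e.log
ls -l /home/opnfv/functest/results/sig_storage_serial/junit_01.xml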