I0812 10:47:03.976133 6 e2e.go:224] Starting e2e run "2864296a-dc89-11ea-9b9c-0242ac11000c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597229223 - Will randomize all specs
Will run 201 of 2164 specs

Aug 12 10:47:04.141: INFO: >>> kubeConfig: /root/.kube/config
Aug 12 10:47:04.143: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 12 10:47:04.155: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 12 10:47:04.217: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 12 10:47:04.217: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 12 10:47:04.217: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 12 10:47:04.227: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 12 10:47:04.227: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 12 10:47:04.227: INFO: e2e test version: v1.13.12
Aug 12 10:47:04.228: INFO: kube-apiserver version: v1.13.12
SS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 10:47:04.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Aug 12 10:47:05.392: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 12 10:47:05.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-fzcnx" to be "success or failure"
Aug 12 10:47:05.404: INFO: Pod "downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.773249ms
Aug 12 10:47:07.479: INFO: Pod "downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080156123s
Aug 12 10:47:09.599: INFO: Pod "downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200838304s
Aug 12 10:47:12.057: INFO: Pod "downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.658471912s STEP: Saw pod success Aug 12 10:47:12.057: INFO: Pod "downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:47:12.060: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 10:47:12.243: INFO: Waiting for pod downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c to disappear Aug 12 10:47:12.402: INFO: Pod downwardapi-volume-298cb79e-dc89-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:47:12.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fzcnx" for this suite. Aug 12 10:47:20.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:47:20.770: INFO: namespace: e2e-tests-downward-api-fzcnx, resource: bindings, ignored listing per whitelist Aug 12 10:47:20.795: INFO: namespace e2e-tests-downward-api-fzcnx deletion completed in 8.388702662s • [SLOW TEST:16.567 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:47:20.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] 
Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:48:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-tnqvs" for this suite. Aug 12 10:48:12.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:48:12.684: INFO: namespace: e2e-tests-container-runtime-tnqvs, resource: bindings, ignored listing per whitelist Aug 12 10:48:12.708: INFO: namespace e2e-tests-container-runtime-tnqvs deletion completed in 6.247621182s • [SLOW TEST:51.913 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:48:12.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 12 10:48:19.363: INFO: Successfully updated pod "annotationupdate51ba5e52-dc89-11ea-9b9c-0242ac11000c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:48:23.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d8948" for this suite. 
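The Projected downwardAPI case above ("should update annotations on modification") depends on a projected volume that surfaces the pod's own annotations as a file, which the kubelet rewrites after the annotations change. A minimal sketch of that kind of volume, assuming the current k8s.io/api packages; the volume name and file path below are illustrative, not taken from the suite.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume that exposes metadata.annotations as a file;
	// the kubelet refreshes the file when the annotations are updated,
	// which is the behaviour the annotation-update test waits on.
	vol := corev1.Volume{
		Name: "podinfo", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations", // illustrative path
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```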
Aug 12 10:48:45.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:48:45.629: INFO: namespace: e2e-tests-projected-d8948, resource: bindings, ignored listing per whitelist Aug 12 10:48:45.640: INFO: namespace e2e-tests-projected-d8948 deletion completed in 22.229866084s • [SLOW TEST:32.932 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:48:45.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mk8gt Aug 12 10:48:51.889: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mk8gt STEP: checking the pod's current state and verifying that restartCount is present Aug 12 10:48:51.891: INFO: Initial restart count of pod liveness-http is 0 Aug 12 10:49:14.127: INFO: Restart count of pod e2e-tests-container-probe-mk8gt/liveness-http is now 1 (22.236145107s elapsed) Aug 12 10:49:36.653: INFO: Restart count of pod e2e-tests-container-probe-mk8gt/liveness-http is now 2 (44.76209723s elapsed) Aug 12 10:49:57.290: INFO: Restart count of pod e2e-tests-container-probe-mk8gt/liveness-http is now 3 (1m5.399422915s elapsed) Aug 12 10:50:13.471: INFO: Restart count of pod e2e-tests-container-probe-mk8gt/liveness-http is now 4 (1m21.579923163s elapsed) Aug 12 10:51:13.945: INFO: Restart count of pod e2e-tests-container-probe-mk8gt/liveness-http is now 5 (2m22.053698736s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:51:14.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mk8gt" for this suite. 
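The liveness-http case above runs a pod whose HTTP liveness probe starts failing, so the kubelet repeatedly kills and restarts the container and restartCount climbs monotonically. A rough sketch of that kind of probe, assuming the current k8s.io/api packages; the delay, period, threshold, path and port are illustrative values, not the ones the suite uses.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// An HTTP liveness probe: once the handler starts failing, the kubelet
	// restarts the container, so status.containerStatuses[].restartCount
	// keeps increasing, which is what the test asserts on.
	probe := &corev1.Probe{
		InitialDelaySeconds: 15, // illustrative
		PeriodSeconds:       1,  // illustrative
		FailureThreshold:    1,  // illustrative
	}
	// HTTPGet sits on an embedded struct (Handler in older client releases,
	// ProbeHandler in newer ones); assigning through the promoted field
	// sidesteps the name difference.
	probe.HTTPGet = &corev1.HTTPGetAction{
		Path: "/healthz",           // illustrative path
		Port: intstr.FromInt(8080), // illustrative port
	}
	fmt.Printf("%+v\n", probe)
}
```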
Aug 12 10:51:20.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:51:20.304: INFO: namespace: e2e-tests-container-probe-mk8gt, resource: bindings, ignored listing per whitelist Aug 12 10:51:20.329: INFO: namespace e2e-tests-container-probe-mk8gt deletion completed in 6.110898258s • [SLOW TEST:154.689 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:51:20.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 12 10:51:30.536: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 12 10:51:30.579: INFO: Pod pod-with-prestop-http-hook still exists Aug 12 10:51:32.580: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 12 10:51:32.583: INFO: Pod pod-with-prestop-http-hook still exists Aug 12 10:51:34.580: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 12 10:51:34.584: INFO: Pod pod-with-prestop-http-hook still exists Aug 12 10:51:36.580: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 12 10:51:36.584: INFO: Pod pod-with-prestop-http-hook still exists Aug 12 10:51:38.580: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 12 10:51:38.584: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:51:38.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-h7x87" for this suite. 
Aug 12 10:52:00.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:52:00.687: INFO: namespace: e2e-tests-container-lifecycle-hook-h7x87, resource: bindings, ignored listing per whitelist Aug 12 10:52:00.693: INFO: namespace e2e-tests-container-lifecycle-hook-h7x87 deletion completed in 22.098498333s • [SLOW TEST:40.363 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:52:00.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 10:52:01.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-cb5mc" to be "success or failure" Aug 12 10:52:01.179: INFO: Pod "downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 91.50422ms Aug 12 10:52:03.449: INFO: Pod "downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361853844s Aug 12 10:52:05.453: INFO: Pod "downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.3663026s Aug 12 10:52:07.457: INFO: Pod "downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.36985245s STEP: Saw pod success Aug 12 10:52:07.457: INFO: Pod "downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:52:07.460: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 10:52:07.487: INFO: Waiting for pod downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c to disappear Aug 12 10:52:07.521: INFO: Pod downwardapi-volume-d9c93e51-dc89-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:52:07.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cb5mc" for this suite. 
Aug 12 10:52:13.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:52:13.585: INFO: namespace: e2e-tests-downward-api-cb5mc, resource: bindings, ignored listing per whitelist Aug 12 10:52:13.602: INFO: namespace e2e-tests-downward-api-cb5mc deletion completed in 6.07548724s • [SLOW TEST:12.909 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:52:13.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-e157d200-dc89-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-e157d200-dc89-11ea-9b9c-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:52:21.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-694lw" for this suite. 
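The ConfigMap case above ("updates should be reflected in volume") mounts a ConfigMap as a volume, updates the ConfigMap object, and waits for the kubelet's sync loop to rewrite the projected files inside the running pod. A minimal sketch of the volume involved, assuming the current k8s.io/api packages; the volume name is illustrative, and the ConfigMap name is the one from the run above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A configMap volume: after the ConfigMap is updated on the API server,
	// the kubelet eventually refreshes the mounted files, which is the
	// update the pod observes in the test.
	vol := corev1.Volume{
		Name: "configmap-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-upd-e157d200-dc89-11ea-9b9c-0242ac11000c",
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```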
Aug 12 10:52:43.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:52:43.900: INFO: namespace: e2e-tests-configmap-694lw, resource: bindings, ignored listing per whitelist Aug 12 10:52:43.964: INFO: namespace e2e-tests-configmap-694lw deletion completed in 22.114249694s • [SLOW TEST:30.362 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:52:43.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f370fa52-dc89-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 10:52:44.188: INFO: Waiting up to 5m0s for pod "pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-xnxzx" to be "success or failure" Aug 12 10:52:44.220: INFO: Pod "pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.270322ms Aug 12 10:52:46.384: INFO: Pod "pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19658527s Aug 12 10:52:48.389: INFO: Pod "pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.200768532s Aug 12 10:52:50.392: INFO: Pod "pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.204169209s STEP: Saw pod success Aug 12 10:52:50.392: INFO: Pod "pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:52:50.395: INFO: Trying to get logs from node hunter-worker pod pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 10:52:50.458: INFO: Waiting for pod pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c to disappear Aug 12 10:52:50.464: INFO: Pod pod-secrets-f3763bd4-dc89-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:52:50.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xnxzx" for this suite. 
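The Secrets case above ("should be consumable in multiple volumes in a pod") mounts the same Secret through two separate volumes and reads it at both mount points. A minimal sketch of that arrangement, assuming the current k8s.io/api packages; the volume names are illustrative, and the Secret name is the one created in the run above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// twoSecretVolumes builds two distinct volumes backed by the same Secret,
// the shape the multiple-volumes test mounts into one pod.
func twoSecretVolumes(secretName string) []corev1.Volume {
	mk := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}
	return []corev1.Volume{mk("secret-volume-1"), mk("secret-volume-2")}
}

func main() {
	fmt.Printf("%+v\n", twoSecretVolumes("secret-test-f370fa52-dc89-11ea-9b9c-0242ac11000c"))
}
```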
Aug 12 10:52:56.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:52:56.544: INFO: namespace: e2e-tests-secrets-xnxzx, resource: bindings, ignored listing per whitelist Aug 12 10:52:56.596: INFO: namespace e2e-tests-secrets-xnxzx deletion completed in 6.127865906s • [SLOW TEST:12.631 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:52:56.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Aug 12 10:52:56.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-np6fh' Aug 12 10:52:59.234: INFO: stderr: "" Aug 12 10:52:59.234: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Aug 12 10:53:00.239: INFO: Selector matched 1 pods for map[app:redis] Aug 12 10:53:00.239: INFO: Found 0 / 1 Aug 12 10:53:01.474: INFO: Selector matched 1 pods for map[app:redis] Aug 12 10:53:01.474: INFO: Found 0 / 1 Aug 12 10:53:02.238: INFO: Selector matched 1 pods for map[app:redis] Aug 12 10:53:02.238: INFO: Found 0 / 1 Aug 12 10:53:03.258: INFO: Selector matched 1 pods for map[app:redis] Aug 12 10:53:03.258: INFO: Found 0 / 1 Aug 12 10:53:04.239: INFO: Selector matched 1 pods for map[app:redis] Aug 12 10:53:04.239: INFO: Found 1 / 1 Aug 12 10:53:04.239: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 12 10:53:04.243: INFO: Selector matched 1 pods for map[app:redis] Aug 12 10:53:04.243: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Aug 12 10:53:04.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-84576 redis-master --namespace=e2e-tests-kubectl-np6fh' Aug 12 10:53:04.356: INFO: stderr: "" Aug 12 10:53:04.356: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 Aug 10:53:03.032 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Aug 10:53:03.035 # Server started, Redis version 3.2.12\n1:M 12 Aug 10:53:03.035 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Aug 10:53:03.035 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Aug 12 10:53:04.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-84576 redis-master --namespace=e2e-tests-kubectl-np6fh --tail=1' Aug 12 10:53:04.469: INFO: stderr: "" Aug 12 10:53:04.469: INFO: stdout: "1:M 12 Aug 10:53:03.035 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Aug 12 10:53:04.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-84576 redis-master --namespace=e2e-tests-kubectl-np6fh --limit-bytes=1' Aug 12 10:53:04.590: INFO: stderr: "" Aug 12 10:53:04.590: INFO: stdout: " " STEP: exposing timestamps Aug 12 10:53:04.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-84576 redis-master --namespace=e2e-tests-kubectl-np6fh --tail=1 --timestamps' Aug 12 10:53:04.720: INFO: stderr: "" Aug 12 10:53:04.720: INFO: stdout: "2020-08-12T10:53:03.035647192Z 1:M 12 Aug 10:53:03.035 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Aug 12 10:53:07.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-84576 redis-master --namespace=e2e-tests-kubectl-np6fh --since=1s' Aug 12 10:53:07.332: INFO: stderr: "" Aug 12 10:53:07.332: INFO: stdout: "" Aug 12 10:53:07.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-84576 redis-master --namespace=e2e-tests-kubectl-np6fh --since=24h' Aug 12 10:53:07.452: INFO: stderr: "" Aug 12 10:53:07.452: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 Aug 10:53:03.032 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Aug 10:53:03.035 # Server started, Redis version 3.2.12\n1:M 12 Aug 10:53:03.035 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Aug 10:53:03.035 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Aug 12 10:53:07.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-np6fh' Aug 12 10:53:07.597: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 12 10:53:07.597: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Aug 12 10:53:07.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-np6fh' Aug 12 10:53:07.702: INFO: stderr: "No resources found.\n" Aug 12 10:53:07.702: INFO: stdout: "" Aug 12 10:53:07.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-np6fh -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 12 10:53:07.802: INFO: stderr: "" Aug 12 10:53:07.802: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:53:07.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-np6fh" for this suite. 
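The Kubectl logs case above drives `kubectl logs`/`kubectl log` with --tail, --limit-bytes, --timestamps and --since against the redis-master pod. A hedged sketch of the equivalent request through client-go's PodLogOptions, assuming current client-go packages and that the pod from the run above still exists; the options are combined here only to show the fields, whereas the test applies them one at a time.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the kubeconfig pointed to by $KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	tail, limit, since := int64(1), int64(1), int64(86400)
	opts := &corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    &tail,  // kubectl --tail=1
		LimitBytes:   &limit, // kubectl --limit-bytes=1
		Timestamps:   true,   // kubectl --timestamps
		SinceSeconds: &since, // kubectl --since=24h
	}
	// Stream and print the filtered log output.
	stream, err := cs.CoreV1().Pods("e2e-tests-kubectl-np6fh").
		GetLogs("redis-master-84576", opts).Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	data, _ := io.ReadAll(stream)
	fmt.Print(string(data))
}
```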
Aug 12 10:53:30.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:53:30.078: INFO: namespace: e2e-tests-kubectl-np6fh, resource: bindings, ignored listing per whitelist Aug 12 10:53:30.128: INFO: namespace e2e-tests-kubectl-np6fh deletion completed in 22.322709158s • [SLOW TEST:33.532 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:53:30.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-0ee98cd7-dc8a-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 10:53:30.220: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-tct9l" to be "success or failure" Aug 12 10:53:30.251: INFO: Pod "pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.289011ms Aug 12 10:53:32.462: INFO: Pod "pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241936947s Aug 12 10:53:34.465: INFO: Pod "pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.245268655s Aug 12 10:53:36.471: INFO: Pod "pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.250301471s STEP: Saw pod success Aug 12 10:53:36.471: INFO: Pod "pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:53:36.474: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 12 10:53:36.512: INFO: Waiting for pod pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:53:36.524: INFO: Pod pod-projected-configmaps-0eeb84fb-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:53:36.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tct9l" for this suite. 
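The Projected configMap case above ("consumable from pods in volume with mappings") projects a ConfigMap key to an explicit file path rather than the default key-named file. A minimal sketch of that mapping, assuming the current k8s.io/api packages; the key and path are illustrative, and the ConfigMap name is the one from the run above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected configMap source with a key-to-path mapping, so the
	// consuming container finds the value at the mapped path.
	vol := corev1.Volume{
		Name: "projected-configmap-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map-0ee98cd7-dc8a-11ea-9b9c-0242ac11000c",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // illustrative key
							Path: "path/to/data-2", // illustrative mapped path
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```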
Aug 12 10:53:42.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:53:42.547: INFO: namespace: e2e-tests-projected-tct9l, resource: bindings, ignored listing per whitelist Aug 12 10:53:42.612: INFO: namespace e2e-tests-projected-tct9l deletion completed in 6.083822658s • [SLOW TEST:12.483 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:53:42.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Aug 12 10:53:42.767: INFO: Waiting up to 5m0s for pod "client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-containers-vmwpc" to be "success or failure" Aug 12 10:53:42.771: INFO: Pod "client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.709999ms Aug 12 10:53:44.893: INFO: Pod "client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125869893s Aug 12 10:53:46.916: INFO: Pod "client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148315825s Aug 12 10:53:48.921: INFO: Pod "client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153057038s STEP: Saw pod success Aug 12 10:53:48.921: INFO: Pod "client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:53:48.924: INFO: Trying to get logs from node hunter-worker pod client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 10:53:48.951: INFO: Waiting for pod client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:53:48.962: INFO: Pod client-containers-16621e1b-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:53:48.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-vmwpc" for this suite. 
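The Docker Containers case above checks that setting a container's Command replaces the image's ENTRYPOINT (Args, analogously, would replace its CMD), then reads the container output to confirm the override took effect. A minimal sketch, assuming the current k8s.io/api packages; the image and command are illustrative, while "test-container" is the container name in the log above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Command overrides the image's ENTRYPOINT; leaving Args unset keeps
	// the image's CMD (which is ignored once Command is set this way).
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox", // illustrative image
		Command: []string{"/bin/echo", "override", "entrypoint"},
	}
	fmt.Printf("%+v\n", c)
}
```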
Aug 12 10:53:54.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:53:55.031: INFO: namespace: e2e-tests-containers-vmwpc, resource: bindings, ignored listing per whitelist Aug 12 10:53:55.052: INFO: namespace e2e-tests-containers-vmwpc deletion completed in 6.085464243s • [SLOW TEST:12.440 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:53:55.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 12 10:54:02.215: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:54:03.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-rxd4d" for this suite. 
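The ReplicaSet case above first creates a bare pod with a 'name' label, then a ReplicaSet whose selector matches it; the controller adopts the orphan, and once the pod's label is changed so it no longer matches, the pod is released again. A minimal sketch of such a ReplicaSet, assuming the current k8s.io/api packages; the replica count, label value and image are illustrative, with "pod-adoption-release" taken from the run above.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A ReplicaSet whose selector matches the pre-existing pod's label;
	// adoption and release are driven purely by whether the pod's labels
	// satisfy this selector.
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"} // illustrative label
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}}, // illustrative
				},
			},
		},
	}
	fmt.Printf("%+v\n", rs)
}
```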
Aug 12 10:54:27.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:54:27.340: INFO: namespace: e2e-tests-replicaset-rxd4d, resource: bindings, ignored listing per whitelist Aug 12 10:54:27.395: INFO: namespace e2e-tests-replicaset-rxd4d deletion completed in 24.1416746s • [SLOW TEST:32.343 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:54:27.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-dxvgc [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dxvgc STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dxvgc Aug 12 10:54:27.567: INFO: Found 0 stateful pods, waiting for 1 Aug 12 10:54:37.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 12 10:54:37.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 10:54:37.831: INFO: stderr: "I0812 10:54:37.716060 259 log.go:172] (0xc000768210) (0xc0008a0640) Create stream\nI0812 10:54:37.716133 259 log.go:172] (0xc000768210) (0xc0008a0640) Stream added, broadcasting: 1\nI0812 10:54:37.718423 259 log.go:172] (0xc000768210) Reply frame received for 1\nI0812 10:54:37.718468 259 log.go:172] (0xc000768210) (0xc000520be0) Create stream\nI0812 10:54:37.718480 259 log.go:172] (0xc000768210) (0xc000520be0) Stream added, broadcasting: 3\nI0812 10:54:37.719290 259 log.go:172] (0xc000768210) Reply frame received for 3\nI0812 10:54:37.719312 259 log.go:172] (0xc000768210) (0xc0008a06e0) Create stream\nI0812 10:54:37.719320 259 log.go:172] (0xc000768210) (0xc0008a06e0) Stream added, broadcasting: 5\nI0812 10:54:37.719961 259 log.go:172] (0xc000768210) Reply frame received for 5\nI0812 10:54:37.822158 259 log.go:172] (0xc000768210) Data frame received 
for 5\nI0812 10:54:37.822192 259 log.go:172] (0xc0008a06e0) (5) Data frame handling\nI0812 10:54:37.822221 259 log.go:172] (0xc000768210) Data frame received for 3\nI0812 10:54:37.822232 259 log.go:172] (0xc000520be0) (3) Data frame handling\nI0812 10:54:37.822245 259 log.go:172] (0xc000520be0) (3) Data frame sent\nI0812 10:54:37.822260 259 log.go:172] (0xc000768210) Data frame received for 3\nI0812 10:54:37.822276 259 log.go:172] (0xc000520be0) (3) Data frame handling\nI0812 10:54:37.824314 259 log.go:172] (0xc000768210) Data frame received for 1\nI0812 10:54:37.824354 259 log.go:172] (0xc0008a0640) (1) Data frame handling\nI0812 10:54:37.824380 259 log.go:172] (0xc0008a0640) (1) Data frame sent\nI0812 10:54:37.824395 259 log.go:172] (0xc000768210) (0xc0008a0640) Stream removed, broadcasting: 1\nI0812 10:54:37.824550 259 log.go:172] (0xc000768210) Go away received\nI0812 10:54:37.824622 259 log.go:172] (0xc000768210) (0xc0008a0640) Stream removed, broadcasting: 1\nI0812 10:54:37.824651 259 log.go:172] (0xc000768210) (0xc000520be0) Stream removed, broadcasting: 3\nI0812 10:54:37.824685 259 log.go:172] (0xc000768210) (0xc0008a06e0) Stream removed, broadcasting: 5\n" Aug 12 10:54:37.831: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 10:54:37.831: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 10:54:37.835: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 12 10:54:48.073: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 12 10:54:48.073: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 10:54:48.517: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999824s Aug 12 10:54:49.522: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.605904584s Aug 12 10:54:50.525: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.601129681s Aug 12 10:54:51.589: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.597949015s Aug 12 10:54:52.606: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.533967029s Aug 12 10:54:53.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.516534059s Aug 12 10:54:54.817: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.310513066s Aug 12 10:54:55.822: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.306197047s Aug 12 10:54:56.826: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.300864134s Aug 12 10:54:57.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 297.290661ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dxvgc Aug 12 10:54:58.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 10:54:59.170: INFO: stderr: "I0812 10:54:59.106684 282 log.go:172] (0xc0005fa370) (0xc00074c640) Create stream\nI0812 10:54:59.106725 282 log.go:172] (0xc0005fa370) (0xc00074c640) Stream added, broadcasting: 1\nI0812 10:54:59.108524 282 log.go:172] (0xc0005fa370) Reply frame received for 1\nI0812 10:54:59.108557 282 log.go:172] (0xc0005fa370) (0xc00057ad20) Create stream\nI0812 10:54:59.108574 282 log.go:172] (0xc0005fa370) (0xc00057ad20) Stream added, broadcasting: 3\nI0812 
10:54:59.109550 282 log.go:172] (0xc0005fa370) Reply frame received for 3\nI0812 10:54:59.109572 282 log.go:172] (0xc0005fa370) (0xc00074c6e0) Create stream\nI0812 10:54:59.109578 282 log.go:172] (0xc0005fa370) (0xc00074c6e0) Stream added, broadcasting: 5\nI0812 10:54:59.110302 282 log.go:172] (0xc0005fa370) Reply frame received for 5\nI0812 10:54:59.161353 282 log.go:172] (0xc0005fa370) Data frame received for 3\nI0812 10:54:59.161388 282 log.go:172] (0xc00057ad20) (3) Data frame handling\nI0812 10:54:59.161395 282 log.go:172] (0xc00057ad20) (3) Data frame sent\nI0812 10:54:59.161412 282 log.go:172] (0xc0005fa370) Data frame received for 5\nI0812 10:54:59.161443 282 log.go:172] (0xc00074c6e0) (5) Data frame handling\nI0812 10:54:59.161466 282 log.go:172] (0xc0005fa370) Data frame received for 3\nI0812 10:54:59.161480 282 log.go:172] (0xc00057ad20) (3) Data frame handling\nI0812 10:54:59.162770 282 log.go:172] (0xc0005fa370) Data frame received for 1\nI0812 10:54:59.162783 282 log.go:172] (0xc00074c640) (1) Data frame handling\nI0812 10:54:59.162788 282 log.go:172] (0xc00074c640) (1) Data frame sent\nI0812 10:54:59.162799 282 log.go:172] (0xc0005fa370) (0xc00074c640) Stream removed, broadcasting: 1\nI0812 10:54:59.162816 282 log.go:172] (0xc0005fa370) Go away received\nI0812 10:54:59.162996 282 log.go:172] (0xc0005fa370) (0xc00074c640) Stream removed, broadcasting: 1\nI0812 10:54:59.163008 282 log.go:172] (0xc0005fa370) (0xc00057ad20) Stream removed, broadcasting: 3\nI0812 10:54:59.163012 282 log.go:172] (0xc0005fa370) (0xc00074c6e0) Stream removed, broadcasting: 5\n" Aug 12 10:54:59.170: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 10:54:59.170: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 10:54:59.173: INFO: Found 1 stateful pods, waiting for 3 Aug 12 10:55:09.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 10:55:09.218: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 10:55:09.218: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 12 10:55:19.482: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 10:55:19.482: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 10:55:19.482: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 12 10:55:19.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 10:55:20.157: INFO: stderr: "I0812 10:55:20.032949 305 log.go:172] (0xc000130840) (0xc000822640) Create stream\nI0812 10:55:20.033005 305 log.go:172] (0xc000130840) (0xc000822640) Stream added, broadcasting: 1\nI0812 10:55:20.035308 305 log.go:172] (0xc000130840) Reply frame received for 1\nI0812 10:55:20.035356 305 log.go:172] (0xc000130840) (0xc0008226e0) Create stream\nI0812 10:55:20.035367 305 log.go:172] (0xc000130840) (0xc0008226e0) Stream added, broadcasting: 3\nI0812 10:55:20.036442 305 log.go:172] (0xc000130840) Reply frame received for 3\nI0812 10:55:20.036479 305 log.go:172] (0xc000130840) (0xc0000ead20) Create stream\nI0812 
10:55:20.036493 305 log.go:172] (0xc000130840) (0xc0000ead20) Stream added, broadcasting: 5\nI0812 10:55:20.037703 305 log.go:172] (0xc000130840) Reply frame received for 5\nI0812 10:55:20.148387 305 log.go:172] (0xc000130840) Data frame received for 3\nI0812 10:55:20.148416 305 log.go:172] (0xc0008226e0) (3) Data frame handling\nI0812 10:55:20.148429 305 log.go:172] (0xc0008226e0) (3) Data frame sent\nI0812 10:55:20.148706 305 log.go:172] (0xc000130840) Data frame received for 3\nI0812 10:55:20.148863 305 log.go:172] (0xc0008226e0) (3) Data frame handling\nI0812 10:55:20.148890 305 log.go:172] (0xc000130840) Data frame received for 5\nI0812 10:55:20.148902 305 log.go:172] (0xc0000ead20) (5) Data frame handling\nI0812 10:55:20.150882 305 log.go:172] (0xc000130840) Data frame received for 1\nI0812 10:55:20.150893 305 log.go:172] (0xc000822640) (1) Data frame handling\nI0812 10:55:20.150902 305 log.go:172] (0xc000822640) (1) Data frame sent\nI0812 10:55:20.150912 305 log.go:172] (0xc000130840) (0xc000822640) Stream removed, broadcasting: 1\nI0812 10:55:20.151064 305 log.go:172] (0xc000130840) Go away received\nI0812 10:55:20.151088 305 log.go:172] (0xc000130840) (0xc000822640) Stream removed, broadcasting: 1\nI0812 10:55:20.151105 305 log.go:172] (0xc000130840) (0xc0008226e0) Stream removed, broadcasting: 3\nI0812 10:55:20.151114 305 log.go:172] (0xc000130840) (0xc0000ead20) Stream removed, broadcasting: 5\n" Aug 12 10:55:20.157: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 10:55:20.157: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 10:55:20.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 10:55:21.212: INFO: stderr: "I0812 10:55:20.285035 327 log.go:172] (0xc000162580) (0xc0005392c0) Create stream\nI0812 10:55:20.285106 327 log.go:172] (0xc000162580) (0xc0005392c0) Stream added, broadcasting: 1\nI0812 10:55:20.288002 327 log.go:172] (0xc000162580) Reply frame received for 1\nI0812 10:55:20.288038 327 log.go:172] (0xc000162580) (0xc000539360) Create stream\nI0812 10:55:20.288052 327 log.go:172] (0xc000162580) (0xc000539360) Stream added, broadcasting: 3\nI0812 10:55:20.289028 327 log.go:172] (0xc000162580) Reply frame received for 3\nI0812 10:55:20.289061 327 log.go:172] (0xc000162580) (0xc00080c000) Create stream\nI0812 10:55:20.289070 327 log.go:172] (0xc000162580) (0xc00080c000) Stream added, broadcasting: 5\nI0812 10:55:20.289875 327 log.go:172] (0xc000162580) Reply frame received for 5\nI0812 10:55:21.199667 327 log.go:172] (0xc000162580) Data frame received for 3\nI0812 10:55:21.199707 327 log.go:172] (0xc000539360) (3) Data frame handling\nI0812 10:55:21.199737 327 log.go:172] (0xc000539360) (3) Data frame sent\nI0812 10:55:21.200241 327 log.go:172] (0xc000162580) Data frame received for 3\nI0812 10:55:21.200287 327 log.go:172] (0xc000539360) (3) Data frame handling\nI0812 10:55:21.200487 327 log.go:172] (0xc000162580) Data frame received for 5\nI0812 10:55:21.200518 327 log.go:172] (0xc00080c000) (5) Data frame handling\nI0812 10:55:21.202515 327 log.go:172] (0xc000162580) Data frame received for 1\nI0812 10:55:21.202545 327 log.go:172] (0xc0005392c0) (1) Data frame handling\nI0812 10:55:21.202575 327 log.go:172] (0xc0005392c0) (1) Data frame sent\nI0812 10:55:21.202598 327 log.go:172] (0xc000162580) 
(0xc0005392c0) Stream removed, broadcasting: 1\nI0812 10:55:21.202621 327 log.go:172] (0xc000162580) Go away received\nI0812 10:55:21.202896 327 log.go:172] (0xc000162580) (0xc0005392c0) Stream removed, broadcasting: 1\nI0812 10:55:21.202924 327 log.go:172] (0xc000162580) (0xc000539360) Stream removed, broadcasting: 3\nI0812 10:55:21.202938 327 log.go:172] (0xc000162580) (0xc00080c000) Stream removed, broadcasting: 5\n" Aug 12 10:55:21.213: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 10:55:21.213: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 10:55:21.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 10:55:21.634: INFO: stderr: "I0812 10:55:21.390494 350 log.go:172] (0xc00015c790) (0xc0006b3400) Create stream\nI0812 10:55:21.390547 350 log.go:172] (0xc00015c790) (0xc0006b3400) Stream added, broadcasting: 1\nI0812 10:55:21.392205 350 log.go:172] (0xc00015c790) Reply frame received for 1\nI0812 10:55:21.392240 350 log.go:172] (0xc00015c790) (0xc000366000) Create stream\nI0812 10:55:21.392252 350 log.go:172] (0xc00015c790) (0xc000366000) Stream added, broadcasting: 3\nI0812 10:55:21.392871 350 log.go:172] (0xc00015c790) Reply frame received for 3\nI0812 10:55:21.392894 350 log.go:172] (0xc00015c790) (0xc0005d6000) Create stream\nI0812 10:55:21.392903 350 log.go:172] (0xc00015c790) (0xc0005d6000) Stream added, broadcasting: 5\nI0812 10:55:21.393540 350 log.go:172] (0xc00015c790) Reply frame received for 5\nI0812 10:55:21.627124 350 log.go:172] (0xc00015c790) Data frame received for 3\nI0812 10:55:21.627174 350 log.go:172] (0xc000366000) (3) Data frame handling\nI0812 10:55:21.627211 350 log.go:172] (0xc000366000) (3) Data frame sent\nI0812 10:55:21.627711 350 log.go:172] (0xc00015c790) Data frame received for 5\nI0812 10:55:21.627743 350 log.go:172] (0xc00015c790) Data frame received for 3\nI0812 10:55:21.627759 350 log.go:172] (0xc000366000) (3) Data frame handling\nI0812 10:55:21.627777 350 log.go:172] (0xc0005d6000) (5) Data frame handling\nI0812 10:55:21.629139 350 log.go:172] (0xc00015c790) Data frame received for 1\nI0812 10:55:21.629160 350 log.go:172] (0xc0006b3400) (1) Data frame handling\nI0812 10:55:21.629168 350 log.go:172] (0xc0006b3400) (1) Data frame sent\nI0812 10:55:21.629191 350 log.go:172] (0xc00015c790) (0xc0006b3400) Stream removed, broadcasting: 1\nI0812 10:55:21.629208 350 log.go:172] (0xc00015c790) Go away received\nI0812 10:55:21.629463 350 log.go:172] (0xc00015c790) (0xc0006b3400) Stream removed, broadcasting: 1\nI0812 10:55:21.629475 350 log.go:172] (0xc00015c790) (0xc000366000) Stream removed, broadcasting: 3\nI0812 10:55:21.629480 350 log.go:172] (0xc00015c790) (0xc0005d6000) Stream removed, broadcasting: 5\n" Aug 12 10:55:21.634: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 10:55:21.634: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 10:55:21.634: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 10:55:21.709: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 12 10:55:31.720: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 12 10:55:31.720: INFO: Waiting for pod ss-1 
to enter Running - Ready=false, currently Running - Ready=false Aug 12 10:55:31.720: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 12 10:55:31.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999787s Aug 12 10:55:32.737: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992018997s Aug 12 10:55:33.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987004485s Aug 12 10:55:34.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977959261s Aug 12 10:55:35.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973761046s Aug 12 10:55:37.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.93745338s Aug 12 10:55:38.106: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.642705808s Aug 12 10:55:39.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.618111179s Aug 12 10:55:40.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.473301665s Aug 12 10:55:41.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 468.168382ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-dxvgc Aug 12 10:55:42.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 10:55:42.609: INFO: stderr: "I0812 10:55:42.555260 372 log.go:172] (0xc000138840) (0xc0005bf360) Create stream\nI0812 10:55:42.555301 372 log.go:172] (0xc000138840) (0xc0005bf360) Stream added, broadcasting: 1\nI0812 10:55:42.557061 372 log.go:172] (0xc000138840) Reply frame received for 1\nI0812 10:55:42.557108 372 log.go:172] (0xc000138840) (0xc0005bf400) Create stream\nI0812 10:55:42.557123 372 log.go:172] (0xc000138840) (0xc0005bf400) Stream added, broadcasting: 3\nI0812 10:55:42.558079 372 log.go:172] (0xc000138840) Reply frame received for 3\nI0812 10:55:42.558115 372 log.go:172] (0xc000138840) (0xc000588000) Create stream\nI0812 10:55:42.558125 372 log.go:172] (0xc000138840) (0xc000588000) Stream added, broadcasting: 5\nI0812 10:55:42.558979 372 log.go:172] (0xc000138840) Reply frame received for 5\nI0812 10:55:42.603199 372 log.go:172] (0xc000138840) Data frame received for 3\nI0812 10:55:42.603236 372 log.go:172] (0xc0005bf400) (3) Data frame handling\nI0812 10:55:42.603264 372 log.go:172] (0xc0005bf400) (3) Data frame sent\nI0812 10:55:42.603283 372 log.go:172] (0xc000138840) Data frame received for 3\nI0812 10:55:42.603296 372 log.go:172] (0xc0005bf400) (3) Data frame handling\nI0812 10:55:42.603324 372 log.go:172] (0xc000138840) Data frame received for 5\nI0812 10:55:42.603363 372 log.go:172] (0xc000588000) (5) Data frame handling\nI0812 10:55:42.605166 372 log.go:172] (0xc000138840) Data frame received for 1\nI0812 10:55:42.605191 372 log.go:172] (0xc0005bf360) (1) Data frame handling\nI0812 10:55:42.605211 372 log.go:172] (0xc0005bf360) (1) Data frame sent\nI0812 10:55:42.605228 372 log.go:172] (0xc000138840) (0xc0005bf360) Stream removed, broadcasting: 1\nI0812 10:55:42.605258 372 log.go:172] (0xc000138840) Go away received\nI0812 10:55:42.605473 372 log.go:172] (0xc000138840) (0xc0005bf360) Stream removed, broadcasting: 1\nI0812 10:55:42.605501 372 log.go:172] (0xc000138840) (0xc0005bf400) Stream removed, broadcasting: 3\nI0812 10:55:42.605520 372 log.go:172] (0xc000138840) (0xc000588000) Stream 
removed, broadcasting: 5\n" Aug 12 10:55:42.609: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 10:55:42.609: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 10:55:42.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 10:55:42.795: INFO: stderr: "I0812 10:55:42.734068 395 log.go:172] (0xc000138790) (0xc00070c640) Create stream\nI0812 10:55:42.734120 395 log.go:172] (0xc000138790) (0xc00070c640) Stream added, broadcasting: 1\nI0812 10:55:42.736049 395 log.go:172] (0xc000138790) Reply frame received for 1\nI0812 10:55:42.736082 395 log.go:172] (0xc000138790) (0xc00070c6e0) Create stream\nI0812 10:55:42.736091 395 log.go:172] (0xc000138790) (0xc00070c6e0) Stream added, broadcasting: 3\nI0812 10:55:42.736828 395 log.go:172] (0xc000138790) Reply frame received for 3\nI0812 10:55:42.736845 395 log.go:172] (0xc000138790) (0xc00070c780) Create stream\nI0812 10:55:42.736854 395 log.go:172] (0xc000138790) (0xc00070c780) Stream added, broadcasting: 5\nI0812 10:55:42.737490 395 log.go:172] (0xc000138790) Reply frame received for 5\nI0812 10:55:42.789242 395 log.go:172] (0xc000138790) Data frame received for 5\nI0812 10:55:42.789271 395 log.go:172] (0xc00070c780) (5) Data frame handling\nI0812 10:55:42.789290 395 log.go:172] (0xc000138790) Data frame received for 3\nI0812 10:55:42.789298 395 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0812 10:55:42.789311 395 log.go:172] (0xc00070c6e0) (3) Data frame sent\nI0812 10:55:42.789328 395 log.go:172] (0xc000138790) Data frame received for 3\nI0812 10:55:42.789342 395 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0812 10:55:42.790637 395 log.go:172] (0xc000138790) Data frame received for 1\nI0812 10:55:42.790654 395 log.go:172] (0xc00070c640) (1) Data frame handling\nI0812 10:55:42.790667 395 log.go:172] (0xc00070c640) (1) Data frame sent\nI0812 10:55:42.790735 395 log.go:172] (0xc000138790) (0xc00070c640) Stream removed, broadcasting: 1\nI0812 10:55:42.790797 395 log.go:172] (0xc000138790) Go away received\nI0812 10:55:42.790860 395 log.go:172] (0xc000138790) (0xc00070c640) Stream removed, broadcasting: 1\nI0812 10:55:42.790874 395 log.go:172] (0xc000138790) (0xc00070c6e0) Stream removed, broadcasting: 3\nI0812 10:55:42.790882 395 log.go:172] (0xc000138790) (0xc00070c780) Stream removed, broadcasting: 5\n" Aug 12 10:55:42.795: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 10:55:42.795: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 10:55:42.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dxvgc ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 10:55:43.026: INFO: stderr: "I0812 10:55:42.972283 417 log.go:172] (0xc00086a2c0) (0xc000784640) Create stream\nI0812 10:55:42.972324 417 log.go:172] (0xc00086a2c0) (0xc000784640) Stream added, broadcasting: 1\nI0812 10:55:42.973789 417 log.go:172] (0xc00086a2c0) Reply frame received for 1\nI0812 10:55:42.973823 417 log.go:172] (0xc00086a2c0) (0xc000686d20) Create stream\nI0812 10:55:42.973831 417 log.go:172] (0xc00086a2c0) (0xc000686d20) Stream added, broadcasting: 3\nI0812 10:55:42.974394 417 log.go:172] 
(0xc00086a2c0) Reply frame received for 3\nI0812 10:55:42.974422 417 log.go:172] (0xc00086a2c0) (0xc0005b6000) Create stream\nI0812 10:55:42.974435 417 log.go:172] (0xc00086a2c0) (0xc0005b6000) Stream added, broadcasting: 5\nI0812 10:55:42.974950 417 log.go:172] (0xc00086a2c0) Reply frame received for 5\nI0812 10:55:43.021314 417 log.go:172] (0xc00086a2c0) Data frame received for 5\nI0812 10:55:43.021342 417 log.go:172] (0xc0005b6000) (5) Data frame handling\nI0812 10:55:43.021364 417 log.go:172] (0xc00086a2c0) Data frame received for 3\nI0812 10:55:43.021374 417 log.go:172] (0xc000686d20) (3) Data frame handling\nI0812 10:55:43.021384 417 log.go:172] (0xc000686d20) (3) Data frame sent\nI0812 10:55:43.021393 417 log.go:172] (0xc00086a2c0) Data frame received for 3\nI0812 10:55:43.021400 417 log.go:172] (0xc000686d20) (3) Data frame handling\nI0812 10:55:43.022275 417 log.go:172] (0xc00086a2c0) Data frame received for 1\nI0812 10:55:43.022292 417 log.go:172] (0xc000784640) (1) Data frame handling\nI0812 10:55:43.022304 417 log.go:172] (0xc000784640) (1) Data frame sent\nI0812 10:55:43.022316 417 log.go:172] (0xc00086a2c0) (0xc000784640) Stream removed, broadcasting: 1\nI0812 10:55:43.022332 417 log.go:172] (0xc00086a2c0) Go away received\nI0812 10:55:43.022586 417 log.go:172] (0xc00086a2c0) (0xc000784640) Stream removed, broadcasting: 1\nI0812 10:55:43.022620 417 log.go:172] (0xc00086a2c0) (0xc000686d20) Stream removed, broadcasting: 3\nI0812 10:55:43.022641 417 log.go:172] (0xc00086a2c0) (0xc0005b6000) Stream removed, broadcasting: 5\n" Aug 12 10:55:43.026: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 10:55:43.026: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 10:55:43.026: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 12 10:56:03.043: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dxvgc Aug 12 10:56:03.045: INFO: Scaling statefulset ss to 0 Aug 12 10:56:03.053: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 10:56:03.055: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:56:03.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dxvgc" for this suite. 
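The block above exercises ordered scale-down of a StatefulSet with the default OrderedReady pod management. A rough way to observe the same behaviour with plain kubectl is sketched below; the StatefulSet name ss, the namespace statefulset-demo and the label app=ss are placeholders, not the test's own objects.

# Scale an existing StatefulSet to 0 and watch its pods terminate in reverse ordinal order (ss-2, ss-1, ss-0).
kubectl -n statefulset-demo scale statefulset ss --replicas=0
# With OrderedReady pod management, scaling also halts while any stateful pod is unhealthy
# (for example while its readiness probe fails), which is what the test provokes above.
kubectl -n statefulset-demo get pods -l app=ss -w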
Aug 12 10:56:11.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:56:11.197: INFO: namespace: e2e-tests-statefulset-dxvgc, resource: bindings, ignored listing per whitelist Aug 12 10:56:11.239: INFO: namespace e2e-tests-statefulset-dxvgc deletion completed in 8.168531784s • [SLOW TEST:103.844 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:56:11.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0812 10:56:24.711154 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 12 10:56:24.711: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:56:24.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-k86rd" for this suite. Aug 12 10:56:32.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:56:32.821: INFO: namespace: e2e-tests-gc-k86rd, resource: bindings, ignored listing per whitelist Aug 12 10:56:32.894: INFO: namespace e2e-tests-gc-k86rd deletion completed in 8.179788135s • [SLOW TEST:21.655 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:56:32.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 10:56:32.998: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 12 10:56:33.008: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 12 10:56:38.010: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 12 10:56:38.010: INFO: Creating deployment "test-rolling-update-deployment" Aug 12 10:56:38.012: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted 
replica set "test-rolling-update-controller" has Aug 12 10:56:38.035: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 12 10:56:40.042: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 12 10:56:40.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 10:56:42.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 10:56:44.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 10:56:46.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826605, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732826598, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 10:56:48.489: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 12 10:56:48.496: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-w9xxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w9xxx/deployments/test-rolling-update-deployment,UID:7edb2d8e-dc8a-11ea-b2c9-0242ac120008,ResourceVersion:5888164,Generation:1,CreationTimestamp:2020-08-12 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-12 10:56:38 +0000 UTC 2020-08-12 10:56:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-12 10:56:46 +0000 UTC 2020-08-12 10:56:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 12 10:56:48.498: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-w9xxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w9xxx/replicasets/test-rolling-update-deployment-75db98fb4c,UID:7edf7c63-dc8a-11ea-b2c9-0242ac120008,ResourceVersion:5888153,Generation:1,CreationTimestamp:2020-08-12 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7edb2d8e-dc8a-11ea-b2c9-0242ac120008 0xc000e05e57 0xc000e05e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 12 10:56:48.498: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 12 10:56:48.499: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-w9xxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w9xxx/replicasets/test-rolling-update-controller,UID:7bde6ddd-dc8a-11ea-b2c9-0242ac120008,ResourceVersion:5888163,Generation:2,CreationTimestamp:2020-08-12 10:56:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7edb2d8e-dc8a-11ea-b2c9-0242ac120008 0xc000e05d97 0xc000e05d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 12 10:56:48.501: INFO: Pod "test-rolling-update-deployment-75db98fb4c-p7rgk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-p7rgk,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-w9xxx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w9xxx/pods/test-rolling-update-deployment-75db98fb4c-p7rgk,UID:7eed5ac6-dc8a-11ea-b2c9-0242ac120008,ResourceVersion:5888151,Generation:0,CreationTimestamp:2020-08-12 10:56:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 7edf7c63-dc8a-11ea-b2c9-0242ac120008 0xc0016fedb7 0xc0016fedb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mlz8n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mlz8n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mlz8n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016fee50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016fee70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 10:56:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 10:56:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 10:56:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 10:56:38 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.117,StartTime:2020-08-12 10:56:38 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-12 10:56:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://5ed7ed4f25c3df3615bbd48ee2ef6c9a8afb1c3677adb2471512c51d5a27b09b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:56:48.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-w9xxx" for this suite. 
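The Deployment dump above shows the default RollingUpdate strategy (maxUnavailable and maxSurge both 25%), a new ReplicaSet taking over, and the adopted old ReplicaSet kept at 0 replicas. A minimal sketch of the same setup, with placeholder names and image tags rather than the test's generated objects:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rolling-update-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        app: rolling-update-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
# Changing the pod template triggers a rolling update: old pods are deleted, new ones created,
# and the old ReplicaSet is retained (scaled to 0) for rollback, as in the log above.
kubectl set image deployment/rolling-update-demo app=nginx:1.15-alpine
kubectl rollout status deployment/rolling-update-demo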
Aug 12 10:56:56.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:56:56.544: INFO: namespace: e2e-tests-deployment-w9xxx, resource: bindings, ignored listing per whitelist Aug 12 10:56:56.571: INFO: namespace e2e-tests-deployment-w9xxx deletion completed in 8.068671174s • [SLOW TEST:23.677 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:56:56.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 12 10:57:01.537: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8a07f537-dc8a-11ea-9b9c-0242ac11000c" Aug 12 10:57:01.537: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8a07f537-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-pods-hcwpl" to be "terminated due to deadline exceeded" Aug 12 10:57:01.554: INFO: Pod "pod-update-activedeadlineseconds-8a07f537-dc8a-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 16.2054ms Aug 12 10:57:03.602: INFO: Pod "pod-update-activedeadlineseconds-8a07f537-dc8a-11ea-9b9c-0242ac11000c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.06463461s Aug 12 10:57:03.602: INFO: Pod "pod-update-activedeadlineseconds-8a07f537-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:57:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hcwpl" for this suite. 
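The pod above is failed by the kubelet once its activeDeadlineSeconds elapses. The same update can be made by hand with a strategic-merge patch; the pod name deadline-demo is a placeholder for any running pod.

# Give the running pod 5 seconds to live; the kubelet then terminates it
# and the pod ends up Failed with reason DeadlineExceeded.
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod deadline-demo -w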
Aug 12 10:57:09.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:57:09.809: INFO: namespace: e2e-tests-pods-hcwpl, resource: bindings, ignored listing per whitelist Aug 12 10:57:09.861: INFO: namespace e2e-tests-pods-hcwpl deletion completed in 6.254955621s • [SLOW TEST:13.289 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:57:09.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 10:57:10.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-4tw8t" to be "success or failure" Aug 12 10:57:10.027: INFO: Pod "downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.941625ms Aug 12 10:57:12.030: INFO: Pod "downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00709404s Aug 12 10:57:14.033: INFO: Pod "downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009337394s STEP: Saw pod success Aug 12 10:57:14.033: INFO: Pod "downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:57:14.035: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 10:57:14.533: INFO: Waiting for pod downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:57:14.565: INFO: Pod downwardapi-volume-91ec85a8-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:57:14.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4tw8t" for this suite. 
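What this test checks is that a downwardAPI volume item for limits.memory falls back to the node's allocatable memory when the container declares no memory limit. A minimal pod showing that wiring; the names and image are placeholders, not the test's fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # no limit is set, so this reports node allocatable memory
EOF
kubectl logs downward-memlimit-demo   # once the pod has completed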
Aug 12 10:57:20.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:57:20.739: INFO: namespace: e2e-tests-downward-api-4tw8t, resource: bindings, ignored listing per whitelist Aug 12 10:57:20.740: INFO: namespace e2e-tests-downward-api-4tw8t deletion completed in 6.172181135s • [SLOW TEST:10.879 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:57:20.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 12 10:57:20.951: INFO: Waiting up to 5m0s for pod "pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-9gsjs" to be "success or failure" Aug 12 10:57:20.980: INFO: Pod "pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.431862ms Aug 12 10:57:23.105: INFO: Pod "pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154177134s Aug 12 10:57:25.378: INFO: Pod "pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.426895745s Aug 12 10:57:27.381: INFO: Pod "pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429393145s STEP: Saw pod success Aug 12 10:57:27.381: INFO: Pod "pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:57:27.382: INFO: Trying to get logs from node hunter-worker2 pod pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 10:57:27.478: INFO: Waiting for pod pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:57:27.483: INFO: Pod pod-98711ae1-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:57:27.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9gsjs" for this suite. 
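The (non-root,0777,default) case boils down to a non-root container creating a world-writable file on a default-medium emptyDir. A rough equivalent is sketched here; the uid, names and image are arbitrary choices, not the test's:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /cache/f && chmod 0777 /cache/f && ls -ln /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                 # default medium: backed by node storage
EOF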
Aug 12 10:57:33.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:57:33.850: INFO: namespace: e2e-tests-emptydir-9gsjs, resource: bindings, ignored listing per whitelist Aug 12 10:57:33.870: INFO: namespace e2e-tests-emptydir-9gsjs deletion completed in 6.3849933s • [SLOW TEST:13.130 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:57:33.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-a0669fb3-dc8a-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 10:57:34.311: INFO: Waiting up to 5m0s for pod "pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-4f55k" to be "success or failure" Aug 12 10:57:34.353: INFO: Pod "pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.587087ms Aug 12 10:57:36.356: INFO: Pod "pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045458384s Aug 12 10:57:38.363: INFO: Pod "pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05217064s STEP: Saw pod success Aug 12 10:57:38.363: INFO: Pod "pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:57:38.365: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 10:57:38.406: INFO: Waiting for pod pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:57:38.417: INFO: Pod pod-secrets-a067075f-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:57:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4f55k" for this suite. 
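"Mappings" in the test name refers to the items list on a secret volume, which projects a chosen key under a custom path instead of its key name. A sketch with placeholder names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1    # the key is exposed at /etc/secret-volume/new-path-data-1
EOF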
Aug 12 10:57:44.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:57:44.486: INFO: namespace: e2e-tests-secrets-4f55k, resource: bindings, ignored listing per whitelist Aug 12 10:57:44.493: INFO: namespace e2e-tests-secrets-4f55k deletion completed in 6.073731059s • [SLOW TEST:10.623 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:57:44.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Aug 12 10:57:44.706: INFO: Waiting up to 5m0s for pod "var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-var-expansion-84pk6" to be "success or failure" Aug 12 10:57:44.712: INFO: Pod "var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.231314ms Aug 12 10:57:46.741: INFO: Pod "var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03484793s Aug 12 10:57:48.795: INFO: Pod "var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.08872874s Aug 12 10:57:50.798: INFO: Pod "var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091721844s STEP: Saw pod success Aug 12 10:57:50.798: INFO: Pod "var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:57:50.799: INFO: Trying to get logs from node hunter-worker pod var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c container dapi-container: STEP: delete the pod Aug 12 10:57:50.850: INFO: Waiting for pod var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:57:50.861: INFO: Pod var-expansion-a69a111c-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:57:50.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-84pk6" for this suite. 
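The substitution being tested is the kubelet's own $(VAR) expansion in command and args, which happens before the container starts and independently of any shell. A minimal pod demonstrating it; all names are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from variable expansion"
    # $(MESSAGE) below is expanded by Kubernetes, not by a shell
    command: ["/bin/echo", "$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo   # prints the expanded value once the pod has completed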
Aug 12 10:57:58.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:57:58.967: INFO: namespace: e2e-tests-var-expansion-84pk6, resource: bindings, ignored listing per whitelist Aug 12 10:57:58.973: INFO: namespace e2e-tests-var-expansion-84pk6 deletion completed in 8.108673096s • [SLOW TEST:14.479 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:57:58.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 12 10:57:59.091: INFO: Waiting up to 5m0s for pod "pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-s88dd" to be "success or failure" Aug 12 10:57:59.110: INFO: Pod "pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.968255ms Aug 12 10:58:01.113: INFO: Pod "pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021933612s Aug 12 10:58:03.116: INFO: Pod "pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025294873s STEP: Saw pod success Aug 12 10:58:03.116: INFO: Pod "pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:58:03.119: INFO: Trying to get logs from node hunter-worker2 pod pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 10:58:03.161: INFO: Waiting for pod pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:58:03.166: INFO: Pod pod-af2dff66-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:58:03.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-s88dd" for this suite. 
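Relative to the default-medium emptyDir case earlier, the only difference here is medium: Memory, which backs the volume with tmpfs. A short sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /cache && touch /cache/f && chmod 0777 /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # tmpfs-backed volume
EOF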
Aug 12 10:58:09.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:58:09.200: INFO: namespace: e2e-tests-emptydir-s88dd, resource: bindings, ignored listing per whitelist Aug 12 10:58:09.238: INFO: namespace e2e-tests-emptydir-s88dd deletion completed in 6.070849643s • [SLOW TEST:10.266 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:58:09.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Aug 12 10:58:09.518: INFO: Waiting up to 5m0s for pod "client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-containers-cfhvl" to be "success or failure" Aug 12 10:58:09.771: INFO: Pod "client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 253.172318ms Aug 12 10:58:11.775: INFO: Pod "client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25711673s Aug 12 10:58:13.778: INFO: Pod "client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.260808309s STEP: Saw pod success Aug 12 10:58:13.778: INFO: Pod "client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 10:58:13.781: INFO: Trying to get logs from node hunter-worker2 pod client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 10:58:13.838: INFO: Waiting for pod client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 10:58:13.843: INFO: Pod client-containers-b5656d52-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:58:13.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-cfhvl" for this suite. 
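When a container spec leaves both command and args unset, the image's own ENTRYPOINT and CMD run unmodified, which is the behaviour verified above. A placeholder sketch, using an image tag that appears elsewhere in this run rather than the test's own image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: test-container
    image: nginx:1.14-alpine     # no command/args: the image defaults are used as-is
EOF
kubectl get pod image-defaults-demo   # the container runs the image's default entrypoint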
Aug 12 10:58:23.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:58:23.922: INFO: namespace: e2e-tests-containers-cfhvl, resource: bindings, ignored listing per whitelist Aug 12 10:58:23.933: INFO: namespace e2e-tests-containers-cfhvl deletion completed in 10.08825952s • [SLOW TEST:14.695 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:58:23.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Aug 12 10:58:24.664: INFO: created pod pod-service-account-defaultsa Aug 12 10:58:24.664: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 12 10:58:24.671: INFO: created pod pod-service-account-mountsa Aug 12 10:58:24.671: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 12 10:58:24.695: INFO: created pod pod-service-account-nomountsa Aug 12 10:58:24.695: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 12 10:58:24.783: INFO: created pod pod-service-account-defaultsa-mountspec Aug 12 10:58:24.783: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 12 10:58:24.821: INFO: created pod pod-service-account-mountsa-mountspec Aug 12 10:58:24.821: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 12 10:58:24.921: INFO: created pod pod-service-account-nomountsa-mountspec Aug 12 10:58:24.921: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 12 10:58:24.939: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 12 10:58:24.939: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 12 10:58:24.979: INFO: created pod pod-service-account-mountsa-nomountspec Aug 12 10:58:24.979: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 12 10:58:25.009: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 12 10:58:25.009: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:58:25.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-dtp2x" for this suite. 
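Opting out of API token automount can be done on the ServiceAccount, on the pod, or both; when both are set, the pod-level field takes precedence, which is the matrix the pods above walk through. A sketch with placeholder names:

kubectl create serviceaccount nomount-sa
kubectl patch serviceaccount nomount-sa -p '{"automountServiceAccountToken": false}'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level opt-out overrides the ServiceAccount setting
  containers:
  - name: app
    image: nginx:1.14-alpine
EOF
# No service account token volumeMount should appear on the container:
kubectl get pod no-token-demo -o jsonpath='{.spec.containers[0].volumeMounts}'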
Aug 12 10:59:01.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:59:01.166: INFO: namespace: e2e-tests-svcaccounts-dtp2x, resource: bindings, ignored listing per whitelist Aug 12 10:59:01.224: INFO: namespace e2e-tests-svcaccounts-dtp2x deletion completed in 36.146028449s • [SLOW TEST:37.290 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:59:01.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rqvl8 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 12 10:59:01.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 12 10:59:26.404: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.129:8080/dial?request=hostName&protocol=http&host=10.244.2.63&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rqvl8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 10:59:26.405: INFO: >>> kubeConfig: /root/.kube/config I0812 10:59:26.433814 6 log.go:172] (0xc000aec790) (0xc00151a780) Create stream I0812 10:59:26.433842 6 log.go:172] (0xc000aec790) (0xc00151a780) Stream added, broadcasting: 1 I0812 10:59:26.435498 6 log.go:172] (0xc000aec790) Reply frame received for 1 I0812 10:59:26.435533 6 log.go:172] (0xc000aec790) (0xc0012a66e0) Create stream I0812 10:59:26.435545 6 log.go:172] (0xc000aec790) (0xc0012a66e0) Stream added, broadcasting: 3 I0812 10:59:26.436591 6 log.go:172] (0xc000aec790) Reply frame received for 3 I0812 10:59:26.436609 6 log.go:172] (0xc000aec790) (0xc0012a6780) Create stream I0812 10:59:26.436616 6 log.go:172] (0xc000aec790) (0xc0012a6780) Stream added, broadcasting: 5 I0812 10:59:26.437519 6 log.go:172] (0xc000aec790) Reply frame received for 5 I0812 10:59:26.657560 6 log.go:172] (0xc000aec790) Data frame received for 3 I0812 10:59:26.657591 6 log.go:172] (0xc0012a66e0) (3) Data frame handling I0812 10:59:26.657608 6 log.go:172] (0xc0012a66e0) (3) Data frame sent I0812 10:59:26.658070 6 log.go:172] (0xc000aec790) Data frame received for 3 I0812 10:59:26.658119 6 log.go:172] (0xc0012a66e0) (3) Data frame handling I0812 10:59:26.658517 6 log.go:172] (0xc000aec790) Data frame received for 5 I0812 10:59:26.658540 6 log.go:172] (0xc0012a6780) (5) Data frame handling I0812 10:59:26.659762 6 log.go:172] (0xc000aec790) Data frame received for 1 I0812 
10:59:26.659792 6 log.go:172] (0xc00151a780) (1) Data frame handling I0812 10:59:26.659816 6 log.go:172] (0xc00151a780) (1) Data frame sent I0812 10:59:26.659844 6 log.go:172] (0xc000aec790) (0xc00151a780) Stream removed, broadcasting: 1 I0812 10:59:26.660124 6 log.go:172] (0xc000aec790) Go away received I0812 10:59:26.660166 6 log.go:172] (0xc000aec790) (0xc00151a780) Stream removed, broadcasting: 1 I0812 10:59:26.660204 6 log.go:172] (0xc000aec790) (0xc0012a66e0) Stream removed, broadcasting: 3 I0812 10:59:26.660245 6 log.go:172] (0xc000aec790) (0xc0012a6780) Stream removed, broadcasting: 5 Aug 12 10:59:26.660: INFO: Waiting for endpoints: map[] Aug 12 10:59:26.663: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.129:8080/dial?request=hostName&protocol=http&host=10.244.1.128&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rqvl8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 10:59:26.663: INFO: >>> kubeConfig: /root/.kube/config I0812 10:59:26.689745 6 log.go:172] (0xc000aecc60) (0xc00151aa00) Create stream I0812 10:59:26.689776 6 log.go:172] (0xc000aecc60) (0xc00151aa00) Stream added, broadcasting: 1 I0812 10:59:26.695781 6 log.go:172] (0xc000aecc60) Reply frame received for 1 I0812 10:59:26.695811 6 log.go:172] (0xc000aecc60) (0xc001e880a0) Create stream I0812 10:59:26.695821 6 log.go:172] (0xc000aecc60) (0xc001e880a0) Stream added, broadcasting: 3 I0812 10:59:26.696428 6 log.go:172] (0xc000aecc60) Reply frame received for 3 I0812 10:59:26.696451 6 log.go:172] (0xc000aecc60) (0xc001e88140) Create stream I0812 10:59:26.696460 6 log.go:172] (0xc000aecc60) (0xc001e88140) Stream added, broadcasting: 5 I0812 10:59:26.697144 6 log.go:172] (0xc000aecc60) Reply frame received for 5 I0812 10:59:26.739681 6 log.go:172] (0xc000aecc60) Data frame received for 3 I0812 10:59:26.739706 6 log.go:172] (0xc001e880a0) (3) Data frame handling I0812 10:59:26.739721 6 log.go:172] (0xc001e880a0) (3) Data frame sent I0812 10:59:26.740543 6 log.go:172] (0xc000aecc60) Data frame received for 3 I0812 10:59:26.740567 6 log.go:172] (0xc001e880a0) (3) Data frame handling I0812 10:59:26.740583 6 log.go:172] (0xc000aecc60) Data frame received for 5 I0812 10:59:26.740591 6 log.go:172] (0xc001e88140) (5) Data frame handling I0812 10:59:26.741887 6 log.go:172] (0xc000aecc60) Data frame received for 1 I0812 10:59:26.741909 6 log.go:172] (0xc00151aa00) (1) Data frame handling I0812 10:59:26.741930 6 log.go:172] (0xc00151aa00) (1) Data frame sent I0812 10:59:26.741945 6 log.go:172] (0xc000aecc60) (0xc00151aa00) Stream removed, broadcasting: 1 I0812 10:59:26.741964 6 log.go:172] (0xc000aecc60) Go away received I0812 10:59:26.742064 6 log.go:172] (0xc000aecc60) (0xc00151aa00) Stream removed, broadcasting: 1 I0812 10:59:26.742095 6 log.go:172] (0xc000aecc60) (0xc001e880a0) Stream removed, broadcasting: 3 I0812 10:59:26.742116 6 log.go:172] (0xc000aecc60) (0xc001e88140) Stream removed, broadcasting: 5 Aug 12 10:59:26.742: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 10:59:26.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-rqvl8" for this suite. 
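In the networking check above, each probe execs a curl inside the hostexec container of host-test-container-pod against the /dial endpoint at 10.244.1.129:8080; that endpoint then makes an HTTP request to the pod named in host=... and reports what it received, and the empty "Waiting for endpoints: map[]" line means no expected endpoint is still outstanding, so pod-to-pod HTTP connectivity is confirmed. A roughly equivalent manual probe, with the namespace and pod IPs left as placeholders (the real values appear in the ExecWithOptions lines above):

    kubectl exec host-test-container-pod -c hostexec -n <namespace> -- \
      /bin/sh -c "curl -g -q -s 'http://<dial-pod-ip>:8080/dial?request=hostName&protocol=http&host=<target-pod-ip>&port=8080&tries=1'"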
Aug 12 10:59:50.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 10:59:50.988: INFO: namespace: e2e-tests-pod-network-test-rqvl8, resource: bindings, ignored listing per whitelist Aug 12 10:59:51.001: INFO: namespace e2e-tests-pod-network-test-rqvl8 deletion completed in 24.204086917s • [SLOW TEST:49.777 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 10:59:51.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 10:59:52.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-dgv8z" to be "success or failure" Aug 12 10:59:52.206: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.652637ms Aug 12 10:59:54.209: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004761531s Aug 12 10:59:56.240: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03555644s Aug 12 10:59:58.798: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593709622s Aug 12 11:00:01.079: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875535955s Aug 12 11:00:03.462: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 11.258472644s Aug 12 11:00:05.466: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.262263231s STEP: Saw pod success Aug 12 11:00:05.469: INFO: Pod "downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:00:05.707: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:00:05.778: INFO: Waiting for pod downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c to disappear Aug 12 11:00:05.941: INFO: Pod downwardapi-volume-f237544a-dc8a-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:00:05.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dgv8z" for this suite. Aug 12 11:00:13.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:00:14.026: INFO: namespace: e2e-tests-projected-dgv8z, resource: bindings, ignored listing per whitelist Aug 12 11:00:14.048: INFO: namespace e2e-tests-projected-dgv8z deletion completed in 8.103394715s • [SLOW TEST:23.046 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:00:14.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-822h STEP: Creating a pod to test atomic-volume-subpath Aug 12 11:00:14.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-822h" in namespace "e2e-tests-subpath-2svxf" to be "success or failure" Aug 12 11:00:14.289: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.882433ms Aug 12 11:00:16.764: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486265867s Aug 12 11:00:18.768: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489799319s Aug 12 11:00:20.771: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493051104s Aug 12 11:00:22.776: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.498439055s Aug 12 11:00:24.995: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.717136058s Aug 12 11:00:26.998: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=true. Elapsed: 12.720537564s Aug 12 11:00:29.208: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 14.93006902s Aug 12 11:00:31.211: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 16.932798964s Aug 12 11:00:33.214: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 18.935850931s Aug 12 11:00:35.396: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 21.118593457s Aug 12 11:00:37.400: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 23.121781962s Aug 12 11:00:39.402: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 25.124481626s Aug 12 11:00:41.407: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 27.128889521s Aug 12 11:00:43.410: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Running", Reason="", readiness=false. Elapsed: 29.132474962s Aug 12 11:00:45.415: INFO: Pod "pod-subpath-test-configmap-822h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.137219555s STEP: Saw pod success Aug 12 11:00:45.415: INFO: Pod "pod-subpath-test-configmap-822h" satisfied condition "success or failure" Aug 12 11:00:45.418: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-822h container test-container-subpath-configmap-822h: STEP: delete the pod Aug 12 11:00:45.479: INFO: Waiting for pod pod-subpath-test-configmap-822h to disappear Aug 12 11:00:45.487: INFO: Pod pod-subpath-test-configmap-822h no longer exists STEP: Deleting pod pod-subpath-test-configmap-822h Aug 12 11:00:45.487: INFO: Deleting pod "pod-subpath-test-configmap-822h" in namespace "e2e-tests-subpath-2svxf" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:00:45.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-2svxf" for this suite. 
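The subpath test above mounts a single ConfigMap key over a path where a file already exists inside the container image and then reads it back. A minimal sketch of that pattern; the pod name, ConfigMap name, and key are placeholders, while volumeMounts[].subPath is the real field that makes the mount cover just one file instead of the whole directory:

    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo                 # placeholder
    spec:
      restartPolicy: Never
      volumes:
      - name: config
        configMap:
          name: my-config                # placeholder ConfigMap with a key named "hosts"
      containers:
      - name: main
        image: busybox
        command: ["cat", "/etc/hosts"]   # reads the file the subPath mount replaced
        volumeMounts:
        - name: config
          mountPath: /etc/hosts          # an existing file inside the container image
          subPath: hosts                 # mount only this key over that file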
Aug 12 11:00:51.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:00:51.512: INFO: namespace: e2e-tests-subpath-2svxf, resource: bindings, ignored listing per whitelist Aug 12 11:00:51.583: INFO: namespace e2e-tests-subpath-2svxf deletion completed in 6.08948977s • [SLOW TEST:37.535 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:00:51.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Aug 12 11:00:51.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:52.008: INFO: stderr: "" Aug 12 11:00:52.008: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 12 11:00:52.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:52.188: INFO: stderr: "" Aug 12 11:00:52.188: INFO: stdout: "update-demo-nautilus-mffxh update-demo-nautilus-rzpxw " Aug 12 11:00:52.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mffxh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:52.273: INFO: stderr: "" Aug 12 11:00:52.273: INFO: stdout: "" Aug 12 11:00:52.273: INFO: update-demo-nautilus-mffxh is created but not running Aug 12 11:00:57.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:57.688: INFO: stderr: "" Aug 12 11:00:57.688: INFO: stdout: "update-demo-nautilus-mffxh update-demo-nautilus-rzpxw " Aug 12 11:00:57.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mffxh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:57.950: INFO: stderr: "" Aug 12 11:00:57.950: INFO: stdout: "true" Aug 12 11:00:57.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mffxh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:58.039: INFO: stderr: "" Aug 12 11:00:58.040: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:00:58.040: INFO: validating pod update-demo-nautilus-mffxh Aug 12 11:00:58.562: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:00:58.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 12 11:00:58.562: INFO: update-demo-nautilus-mffxh is verified up and running Aug 12 11:00:58.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzpxw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:58.787: INFO: stderr: "" Aug 12 11:00:58.787: INFO: stdout: "true" Aug 12 11:00:58.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzpxw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:59.003: INFO: stderr: "" Aug 12 11:00:59.003: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:00:59.003: INFO: validating pod update-demo-nautilus-rzpxw Aug 12 11:00:59.060: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:00:59.060: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 12 11:00:59.060: INFO: update-demo-nautilus-rzpxw is verified up and running STEP: using delete to clean up resources Aug 12 11:00:59.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:59.181: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 12 11:00:59.181: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 12 11:00:59.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-gqkw2' Aug 12 11:00:59.407: INFO: stderr: "No resources found.\n" Aug 12 11:00:59.407: INFO: stdout: "" Aug 12 11:00:59.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-gqkw2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 12 11:00:59.506: INFO: stderr: "" Aug 12 11:00:59.506: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:00:59.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gqkw2" for this suite. Aug 12 11:01:21.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:01:21.556: INFO: namespace: e2e-tests-kubectl-gqkw2, resource: bindings, ignored listing per whitelist Aug 12 11:01:21.588: INFO: namespace e2e-tests-kubectl-gqkw2 deletion completed in 22.079156741s • [SLOW TEST:30.005 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:01:21.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0812 11:02:02.931677 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
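The "delete options" in the garbage-collector test name refer to the deletion propagation policy: the replication controller is deleted with orphaning semantics, and the test then waits 30 seconds to confirm the garbage collector leaves the pods in place. A rough command-line equivalent for the kubectl/API version used in this run, with the controller name as a placeholder (newer kubectl spells the flag --cascade=orphan):

    kubectl delete rc <rc-name> --cascade=false
    # API equivalent: send DeleteOptions with propagationPolicy: Orphan on
    # DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<rc-name>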
Aug 12 11:02:02.931: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:02:02.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ptjmt" for this suite. Aug 12 11:02:23.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:02:23.151: INFO: namespace: e2e-tests-gc-ptjmt, resource: bindings, ignored listing per whitelist Aug 12 11:02:23.415: INFO: namespace e2e-tests-gc-ptjmt deletion completed in 20.480266807s • [SLOW TEST:61.826 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:02:23.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-4d310f9c-dc8b-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:02:24.643: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-rzjx8" to be "success or failure" Aug 12 11:02:24.980: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 336.588417ms Aug 12 11:02:26.982: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.339119803s Aug 12 11:02:29.129: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486147897s Aug 12 11:02:31.132: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489075689s Aug 12 11:02:33.752: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.108967183s Aug 12 11:02:35.755: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.111925864s STEP: Saw pod success Aug 12 11:02:35.755: INFO: Pod "pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:02:35.757: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 11:02:36.060: INFO: Waiting for pod pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c to disappear Aug 12 11:02:36.327: INFO: Pod pod-configmaps-4d7640bf-dc8b-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:02:36.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rzjx8" for this suite. Aug 12 11:02:44.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:02:44.649: INFO: namespace: e2e-tests-configmap-rzjx8, resource: bindings, ignored listing per whitelist Aug 12 11:02:44.658: INFO: namespace e2e-tests-configmap-rzjx8 deletion completed in 8.274297196s • [SLOW TEST:21.243 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:02:44.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 12 11:02:44.913: INFO: Waiting up to 5m0s for pod "pod-597655dc-dc8b-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-dlsfg" to be "success or failure" Aug 12 11:02:44.916: INFO: Pod "pod-597655dc-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576779ms Aug 12 11:02:46.955: INFO: Pod "pod-597655dc-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042228098s Aug 12 11:02:49.051: INFO: Pod "pod-597655dc-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138187858s Aug 12 11:02:51.054: INFO: Pod "pod-597655dc-dc8b-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14069845s STEP: Saw pod success Aug 12 11:02:51.054: INFO: Pod "pod-597655dc-dc8b-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:02:51.056: INFO: Trying to get logs from node hunter-worker pod pod-597655dc-dc8b-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 11:02:51.443: INFO: Waiting for pod pod-597655dc-dc8b-11ea-9b9c-0242ac11000c to disappear Aug 12 11:02:51.454: INFO: Pod pod-597655dc-dc8b-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:02:51.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dlsfg" for this suite. Aug 12 11:02:57.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:02:57.710: INFO: namespace: e2e-tests-emptydir-dlsfg, resource: bindings, ignored listing per whitelist Aug 12 11:02:57.724: INFO: namespace e2e-tests-emptydir-dlsfg deletion completed in 6.261938388s • [SLOW TEST:13.066 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:02:57.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:02:57.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-qtcqp" to be "success or failure" Aug 12 11:02:57.962: INFO: Pod "downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.086259ms Aug 12 11:03:00.081: INFO: Pod "downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156874778s Aug 12 11:03:02.087: INFO: Pod "downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.162698937s Aug 12 11:03:04.090: INFO: Pod "downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165175015s STEP: Saw pod success Aug 12 11:03:04.090: INFO: Pod "downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:03:04.091: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:03:04.349: INFO: Waiting for pod downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c to disappear Aug 12 11:03:04.429: INFO: Pod downwardapi-volume-614b4ec6-dc8b-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:03:04.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qtcqp" for this suite. Aug 12 11:03:10.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:03:10.537: INFO: namespace: e2e-tests-projected-qtcqp, resource: bindings, ignored listing per whitelist Aug 12 11:03:10.543: INFO: namespace e2e-tests-projected-qtcqp deletion completed in 6.110166683s • [SLOW TEST:12.819 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:03:10.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 12 11:03:20.737: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:20.837: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:22.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:22.840: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:24.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:24.840: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:26.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:26.842: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:28.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:28.842: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:30.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:30.840: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:32.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:32.957: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:34.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:34.841: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:36.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:36.903: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:38.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:38.841: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:40.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:40.841: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:42.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:42.841: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:44.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:44.840: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:46.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:46.860: INFO: Pod pod-with-poststart-exec-hook still exists Aug 12 11:03:48.837: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 12 11:03:48.855: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:03:48.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9w2hd" for this suite. 
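In the lifecycle-hook test above, the pod's container carries a postStart exec hook that calls back to the handler pod created in the BeforeEach step, and "check poststart hook" verifies the handler actually received that request before the pod is deleted. A minimal sketch of the lifecycle stanza itself; the pod name, image, command, and handler address are placeholders, while lifecycle.postStart.exec.command is the real API field:

    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-hook-demo              # placeholder
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
        lifecycle:
          postStart:
            exec:
              command: ["sh", "-c", "wget -q -O- http://<handler-pod-ip>:8080/echo?msg=poststart"]   # placeholder callback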
Aug 12 11:04:12.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:04:12.960: INFO: namespace: e2e-tests-container-lifecycle-hook-9w2hd, resource: bindings, ignored listing per whitelist Aug 12 11:04:12.982: INFO: namespace e2e-tests-container-lifecycle-hook-9w2hd deletion completed in 24.124014953s • [SLOW TEST:62.439 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:04:12.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Aug 12 11:04:19.242: INFO: Pod pod-hostip-8e21cac1-dc8b-11ea-9b9c-0242ac11000c has hostIP: 172.18.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:04:19.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fv7q6" for this suite. 
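The Pods host IP test above waits until the created pod reports a hostIP in its status (172.18.0.4 here, the node's address). Two quick ways to reach the same field outside the test, with the pod and env names as placeholders:

    kubectl get pod <pod-name> -o jsonpath='{.status.hostIP}'

    # or expose it to the container via the downward API:
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP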
Aug 12 11:04:41.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:04:41.611: INFO: namespace: e2e-tests-pods-fv7q6, resource: bindings, ignored listing per whitelist Aug 12 11:04:41.633: INFO: namespace e2e-tests-pods-fv7q6 deletion completed in 22.386910887s • [SLOW TEST:28.650 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:04:41.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 12 11:04:53.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:04:53.835: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:04:55.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:04:55.839: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:04:57.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:04:57.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:04:59.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:04:59.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:01.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:01.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:03.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:03.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:05.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:05.839: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:07.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:07.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:09.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:09.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:11.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:11.839: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:13.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:13.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:15.835: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear Aug 12 11:05:15.838: INFO: Pod pod-with-prestop-exec-hook still exists Aug 12 11:05:17.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 12 11:05:17.838: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:05:17.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7cmcg" for this suite. Aug 12 11:05:39.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:05:39.917: INFO: namespace: e2e-tests-container-lifecycle-hook-7cmcg, resource: bindings, ignored listing per whitelist Aug 12 11:05:39.940: INFO: namespace e2e-tests-container-lifecycle-hook-7cmcg deletion completed in 22.091963966s • [SLOW TEST:58.307 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:05:39.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c Aug 12 11:05:40.062: INFO: Pod name my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c: Found 0 pods out of 1 Aug 12 11:05:45.065: INFO: Pod name my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c: Found 1 pods out of 1 Aug 12 11:05:45.065: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c" are running Aug 12 11:05:45.067: INFO: Pod "my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c-q4flb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 11:05:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 11:05:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 11:05:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 11:05:40 +0000 UTC Reason: Message:}]) Aug 12 11:05:45.067: INFO: Trying to dial the pod Aug 12 11:05:50.090: INFO: Controller my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c: Got expected 
result from replica 1 [my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c-q4flb]: "my-hostname-basic-c1ee8377-dc8b-11ea-9b9c-0242ac11000c-q4flb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:05:50.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-ds9kj" for this suite. Aug 12 11:05:56.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:05:56.145: INFO: namespace: e2e-tests-replication-controller-ds9kj, resource: bindings, ignored listing per whitelist Aug 12 11:05:56.173: INFO: namespace e2e-tests-replication-controller-ds9kj deletion completed in 6.079243393s • [SLOW TEST:16.233 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:05:56.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:05:56.257: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:06:00.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-tr56c" for this suite. 
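The websocket-logs test above fetches container logs through the API server over a websocket connection instead of a plain HTTP stream; both paths go through the same pod log subresource that kubectl logs uses. For comparison, with names as placeholders:

    # ordinary streaming client
    kubectl logs <pod-name> -n <namespace> -f
    # underlying subresource (the websocket client requests a protocol upgrade on it):
    # GET /api/v1/namespaces/<namespace>/pods/<pod-name>/log?follow=true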
Aug 12 11:06:48.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:06:48.688: INFO: namespace: e2e-tests-pods-tr56c, resource: bindings, ignored listing per whitelist Aug 12 11:06:48.705: INFO: namespace e2e-tests-pods-tr56c deletion completed in 48.285930345s • [SLOW TEST:52.531 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:06:48.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:06:49.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-f8d6f" to be "success or failure" Aug 12 11:06:49.540: INFO: Pod "downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 90.847399ms Aug 12 11:06:51.600: INFO: Pod "downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150462098s Aug 12 11:06:53.602: INFO: Pod "downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.152859455s Aug 12 11:06:55.605: INFO: Pod "downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.156245815s STEP: Saw pod success Aug 12 11:06:55.605: INFO: Pod "downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:06:55.608: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:06:55.679: INFO: Waiting for pod downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c to disappear Aug 12 11:06:55.699: INFO: Pod downwardapi-volume-eb4986a9-dc8b-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:06:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-f8d6f" for this suite. 
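The Downward API volume test above projects the container's own memory limit into a file and has the container print it back. A minimal sketch of that wiring; the pod name, image, and mount path are placeholders, while resourceFieldRef with limits.memory and a divisor is the real mechanism:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-limits-demo            # placeholder
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        resources:
          limits:
            memory: "64Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi                  # report the value in mebibytes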
Aug 12 11:07:03.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:07:03.748: INFO: namespace: e2e-tests-downward-api-f8d6f, resource: bindings, ignored listing per whitelist Aug 12 11:07:03.781: INFO: namespace e2e-tests-downward-api-f8d6f deletion completed in 8.078754477s • [SLOW TEST:15.076 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:07:03.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 12 11:07:04.162: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891235,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 12 11:07:04.163: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891235,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 12 11:07:14.169: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891254,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 12 11:07:14.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891254,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 12 11:07:24.175: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891274,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 12 11:07:24.175: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891274,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 12 11:07:34.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891294,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Aug 12 11:07:34.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-a,UID:f4115681-dc8b-11ea-b2c9-0242ac120008,ResourceVersion:5891294,Generation:0,CreationTimestamp:2020-08-12 11:07:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 12 11:07:44.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-b,UID:0bf8556f-dc8c-11ea-b2c9-0242ac120008,ResourceVersion:5891314,Generation:0,CreationTimestamp:2020-08-12 11:07:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 12 11:07:44.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-b,UID:0bf8556f-dc8c-11ea-b2c9-0242ac120008,ResourceVersion:5891314,Generation:0,CreationTimestamp:2020-08-12 11:07:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 12 11:07:54.316: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-b,UID:0bf8556f-dc8c-11ea-b2c9-0242ac120008,ResourceVersion:5891333,Generation:0,CreationTimestamp:2020-08-12 11:07:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 12 11:07:54.316: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7xfnd,SelfLink:/api/v1/namespaces/e2e-tests-watch-7xfnd/configmaps/e2e-watch-test-configmap-b,UID:0bf8556f-dc8c-11ea-b2c9-0242ac120008,ResourceVersion:5891333,Generation:0,CreationTimestamp:2020-08-12 11:07:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:08:04.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-7xfnd" for this suite. Aug 12 11:08:10.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:08:10.712: INFO: namespace: e2e-tests-watch-7xfnd, resource: bindings, ignored listing per whitelist Aug 12 11:08:10.721: INFO: namespace e2e-tests-watch-7xfnd deletion completed in 6.401613748s • [SLOW TEST:66.940 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:08:10.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:08:11.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-wmbhm" to be "success or failure" Aug 12 11:08:11.150: INFO: Pod "downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.677944ms Aug 12 11:08:13.309: INFO: Pod "downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182122362s Aug 12 11:08:15.313: INFO: Pod "downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185742782s Aug 12 11:08:17.315: INFO: Pod "downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.188096016s STEP: Saw pod success Aug 12 11:08:17.315: INFO: Pod "downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:08:17.317: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:08:17.420: INFO: Waiting for pod downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c to disappear Aug 12 11:08:17.853: INFO: Pod downwardapi-volume-1bf9d525-dc8c-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:08:17.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wmbhm" for this suite. Aug 12 11:08:23.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:08:23.981: INFO: namespace: e2e-tests-downward-api-wmbhm, resource: bindings, ignored listing per whitelist Aug 12 11:08:24.029: INFO: namespace e2e-tests-downward-api-wmbhm deletion completed in 6.172408242s • [SLOW TEST:13.308 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:08:24.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Aug 12 11:08:24.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:27.070: INFO: stderr: "" Aug 12 11:08:27.070: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 12 11:08:27.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:27.202: INFO: stderr: "" Aug 12 11:08:27.202: INFO: stdout: "update-demo-nautilus-mrnnk update-demo-nautilus-x9rz2 " Aug 12 11:08:27.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrnnk -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:27.312: INFO: stderr: "" Aug 12 11:08:27.312: INFO: stdout: "" Aug 12 11:08:27.312: INFO: update-demo-nautilus-mrnnk is created but not running Aug 12 11:08:32.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:32.500: INFO: stderr: "" Aug 12 11:08:32.500: INFO: stdout: "update-demo-nautilus-mrnnk update-demo-nautilus-x9rz2 " Aug 12 11:08:32.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrnnk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:32.655: INFO: stderr: "" Aug 12 11:08:32.655: INFO: stdout: "true" Aug 12 11:08:32.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrnnk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:32.746: INFO: stderr: "" Aug 12 11:08:32.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:08:32.746: INFO: validating pod update-demo-nautilus-mrnnk Aug 12 11:08:32.776: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:08:32.776: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 12 11:08:32.776: INFO: update-demo-nautilus-mrnnk is verified up and running Aug 12 11:08:32.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9rz2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:32.868: INFO: stderr: "" Aug 12 11:08:32.868: INFO: stdout: "true" Aug 12 11:08:32.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9rz2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:33.062: INFO: stderr: "" Aug 12 11:08:33.062: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:08:33.062: INFO: validating pod update-demo-nautilus-x9rz2 Aug 12 11:08:33.164: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:08:33.164: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 12 11:08:33.164: INFO: update-demo-nautilus-x9rz2 is verified up and running STEP: scaling down the replication controller Aug 12 11:08:33.166: INFO: scanned /root for discovery docs: Aug 12 11:08:33.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:34.375: INFO: stderr: "" Aug 12 11:08:34.375: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 12 11:08:34.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:34.475: INFO: stderr: "" Aug 12 11:08:34.475: INFO: stdout: "update-demo-nautilus-mrnnk update-demo-nautilus-x9rz2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 12 11:08:39.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:39.576: INFO: stderr: "" Aug 12 11:08:39.576: INFO: stdout: "update-demo-nautilus-mrnnk update-demo-nautilus-x9rz2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 12 11:08:44.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:44.667: INFO: stderr: "" Aug 12 11:08:44.667: INFO: stdout: "update-demo-nautilus-mrnnk update-demo-nautilus-x9rz2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 12 11:08:49.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:49.772: INFO: stderr: "" Aug 12 11:08:49.772: INFO: stdout: "update-demo-nautilus-x9rz2 " Aug 12 11:08:49.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9rz2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:49.870: INFO: stderr: "" Aug 12 11:08:49.870: INFO: stdout: "true" Aug 12 11:08:49.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9rz2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:49.958: INFO: stderr: "" Aug 12 11:08:49.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:08:49.958: INFO: validating pod update-demo-nautilus-x9rz2 Aug 12 11:08:49.960: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:08:49.960: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
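
The scale-down above runs 'kubectl scale rc update-demo-nautilus --replicas=1' and then polls the pod list until only one replica remains. Roughly the same effect can be had from the typed API by lowering spec.replicas on the replication controller and writing it back; the sketch below is an illustration under that assumption (context-aware client-go, names copied from the log), not the test's own implementation.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rcs := client.CoreV1().ReplicationControllers("e2e-tests-kubectl-tjm72")

	// Fetch the replication controller, lower spec.replicas to 1, and write
	// it back. kubectl scale goes through the scale subresource, but a plain
	// update of the spec has the same end result for this sketch.
	rc, err := rcs.Get(context.TODO(), "update-demo-nautilus", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	one := int32(1)
	rc.Spec.Replicas = &one
	if _, err := rcs.Update(context.TODO(), rc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```
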
Aug 12 11:08:49.960: INFO: update-demo-nautilus-x9rz2 is verified up and running STEP: scaling up the replication controller Aug 12 11:08:49.961: INFO: scanned /root for discovery docs: Aug 12 11:08:49.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:51.218: INFO: stderr: "" Aug 12 11:08:51.218: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 12 11:08:51.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:51.316: INFO: stderr: "" Aug 12 11:08:51.316: INFO: stdout: "update-demo-nautilus-nhc6z update-demo-nautilus-x9rz2 " Aug 12 11:08:51.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhc6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:51.412: INFO: stderr: "" Aug 12 11:08:51.412: INFO: stdout: "" Aug 12 11:08:51.412: INFO: update-demo-nautilus-nhc6z is created but not running Aug 12 11:08:56.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:56.523: INFO: stderr: "" Aug 12 11:08:56.523: INFO: stdout: "update-demo-nautilus-nhc6z update-demo-nautilus-x9rz2 " Aug 12 11:08:56.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhc6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:56.609: INFO: stderr: "" Aug 12 11:08:56.609: INFO: stdout: "true" Aug 12 11:08:56.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhc6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:56.699: INFO: stderr: "" Aug 12 11:08:56.699: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:08:56.699: INFO: validating pod update-demo-nautilus-nhc6z Aug 12 11:08:56.702: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:08:56.702: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 12 11:08:56.702: INFO: update-demo-nautilus-nhc6z is verified up and running Aug 12 11:08:56.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9rz2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:56.851: INFO: stderr: "" Aug 12 11:08:56.851: INFO: stdout: "true" Aug 12 11:08:56.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x9rz2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:56.954: INFO: stderr: "" Aug 12 11:08:56.955: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 11:08:56.955: INFO: validating pod update-demo-nautilus-x9rz2 Aug 12 11:08:56.959: INFO: got data: { "image": "nautilus.jpg" } Aug 12 11:08:56.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 12 11:08:56.959: INFO: update-demo-nautilus-x9rz2 is verified up and running STEP: using delete to clean up resources Aug 12 11:08:56.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:57.138: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 12 11:08:57.138: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 12 11:08:57.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tjm72' Aug 12 11:08:57.515: INFO: stderr: "No resources found.\n" Aug 12 11:08:57.515: INFO: stdout: "" Aug 12 11:08:57.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tjm72 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 12 11:08:57.691: INFO: stderr: "" Aug 12 11:08:57.691: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:08:57.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tjm72" for this suite. 
Aug 12 11:09:19.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:09:19.932: INFO: namespace: e2e-tests-kubectl-tjm72, resource: bindings, ignored listing per whitelist Aug 12 11:09:19.941: INFO: namespace e2e-tests-kubectl-tjm72 deletion completed in 22.246258611s • [SLOW TEST:55.912 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:09:19.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-451f97ba-dc8c-11ea-9b9c-0242ac11000c STEP: Creating secret with name s-test-opt-upd-451f9829-dc8c-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-451f97ba-dc8c-11ea-9b9c-0242ac11000c STEP: Updating secret s-test-opt-upd-451f9829-dc8c-11ea-9b9c-0242ac11000c STEP: Creating secret with name s-test-opt-create-451f9853-dc8c-11ea-9b9c-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:10:50.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lvdn5" for this suite. 
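
The projected-secret spec that just ran mounts a projected volume whose secret sources are marked optional, then deletes one secret, updates another, and creates a third while waiting for the mounted files to converge. The sketch below shows how such a volume is declared with the corev1 types; it is not the test's manifest, and the volume and secret names are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// A projected volume whose secret sources may be absent: the pod still
	// starts, and the projected files appear or vanish as each secret is
	// created, updated, or deleted.
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
							Optional:             &optional,
						},
					},
					{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
							Optional:             &optional,
						},
					},
				},
			},
		},
	}

	// Print the volume definition so the structure can be inspected.
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```
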
Aug 12 11:11:14.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:11:14.868: INFO: namespace: e2e-tests-projected-lvdn5, resource: bindings, ignored listing per whitelist Aug 12 11:11:14.876: INFO: namespace e2e-tests-projected-lvdn5 deletion completed in 24.082990724s • [SLOW TEST:114.934 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:11:14.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 12 11:11:14.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-99qhz' Aug 12 11:11:15.079: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 12 11:11:15.079: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Aug 12 11:11:19.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-99qhz' Aug 12 11:11:19.274: INFO: stderr: "" Aug 12 11:11:19.274: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:11:19.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-99qhz" for this suite. 
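
The 'kubectl run --generator=deployment/v1beta1' call above is flagged as deprecated in its own stderr; all it does is create a Deployment wrapping docker.io/library/nginx:1.14-alpine. A hedged sketch of the non-deprecated route, creating the Deployment directly through the apps/v1 client, follows; the deployment name, labels, and namespace are taken from the log only as placeholders.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	replicas := int32(1)

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	// apps/v1 replaces the extensions/v1beta1 API the old generator targeted.
	_, err = client.AppsV1().Deployments("e2e-tests-kubectl-99qhz").
		Create(context.TODO(), dep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

On the CLI side, the deprecation warning in the log points at 'kubectl run --generator=run-pod/v1' or 'kubectl create' as the replacements.
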
Aug 12 11:11:31.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:11:31.309: INFO: namespace: e2e-tests-kubectl-99qhz, resource: bindings, ignored listing per whitelist Aug 12 11:11:31.339: INFO: namespace e2e-tests-kubectl-99qhz deletion completed in 12.057051945s • [SLOW TEST:16.463 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:11:31.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 12 11:11:31.449: INFO: PodSpec: initContainers in spec.initContainers Aug 12 11:12:25.612: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-936313bd-dc8c-11ea-9b9c-0242ac11000c", GenerateName:"", Namespace:"e2e-tests-init-container-9m89f", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-9m89f/pods/pod-init-936313bd-dc8c-11ea-9b9c-0242ac11000c", UID:"9364f487-dc8c-11ea-b2c9-0242ac120008", ResourceVersion:"5892106", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732827491, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"449752358"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-c5wbd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000dbc400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c5wbd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c5wbd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c5wbd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001eb5008), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001363b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001eb54e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001eb5500)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001eb5508), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001eb550c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732827491, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732827491, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732827491, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732827491, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.94", StartTime:(*v1.Time)(0xc001e0f540), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001e0f6a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0011c1b20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://888aa629a1c99866dc9ce31676e7bbe01e3a68e9b76e9c405de738a4cbfad33e"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e0f700), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e0f620), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:12:25.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9m89f" for this suite. Aug 12 11:12:55.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:12:55.792: INFO: namespace: e2e-tests-init-container-9m89f, resource: bindings, ignored listing per whitelist Aug 12 11:12:55.824: INFO: namespace e2e-tests-init-container-9m89f deletion completed in 30.193438156s • [SLOW TEST:84.484 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:12:55.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-c5cfea11-dc8c-11ea-9b9c-0242ac11000c STEP: Creating configMap with name cm-test-opt-upd-c5cfea67-dc8c-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c5cfea11-dc8c-11ea-9b9c-0242ac11000c STEP: Updating configmap cm-test-opt-upd-c5cfea67-dc8c-11ea-9b9c-0242ac11000c STEP: Creating configMap with name cm-test-opt-create-c5cfea8d-dc8c-11ea-9b9c-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:14:38.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-r96p8" for this suite. Aug 12 11:15:00.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:15:00.498: INFO: namespace: e2e-tests-projected-r96p8, resource: bindings, ignored listing per whitelist Aug 12 11:15:00.527: INFO: namespace e2e-tests-projected-r96p8 deletion completed in 22.159964653s • [SLOW TEST:124.703 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:15:00.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hh4pq [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-hh4pq STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-hh4pq Aug 12 11:15:00.666: INFO: Found 0 stateful pods, waiting for 1 Aug 12 11:15:10.670: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 12 11:15:10.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 11:15:10.929: INFO: stderr: "I0812 11:15:10.802877 1304 log.go:172] (0xc0008da210) (0xc0008d65a0) Create stream\nI0812 11:15:10.802931 1304 log.go:172] (0xc0008da210) (0xc0008d65a0) Stream added, broadcasting: 1\nI0812 11:15:10.810767 1304 log.go:172] (0xc0008da210) Reply frame received for 1\nI0812 11:15:10.810807 1304 log.go:172] (0xc0008da210) (0xc000736000) Create stream\nI0812 11:15:10.810835 1304 log.go:172] (0xc0008da210) (0xc000736000) Stream added, broadcasting: 3\nI0812 11:15:10.811914 1304 log.go:172] (0xc0008da210) Reply frame received for 3\nI0812 11:15:10.811940 1304 log.go:172] (0xc0008da210) (0xc000736140) Create stream\nI0812 11:15:10.811950 1304 log.go:172] (0xc0008da210) (0xc000736140) Stream added, broadcasting: 5\nI0812 11:15:10.812460 1304 log.go:172] (0xc0008da210) Reply frame received for 5\nI0812 11:15:10.921159 1304 log.go:172] (0xc0008da210) Data frame received for 5\nI0812 
11:15:10.921194 1304 log.go:172] (0xc000736140) (5) Data frame handling\nI0812 11:15:10.921233 1304 log.go:172] (0xc0008da210) Data frame received for 3\nI0812 11:15:10.921257 1304 log.go:172] (0xc000736000) (3) Data frame handling\nI0812 11:15:10.921282 1304 log.go:172] (0xc000736000) (3) Data frame sent\nI0812 11:15:10.921316 1304 log.go:172] (0xc0008da210) Data frame received for 3\nI0812 11:15:10.921353 1304 log.go:172] (0xc000736000) (3) Data frame handling\nI0812 11:15:10.922457 1304 log.go:172] (0xc0008da210) Data frame received for 1\nI0812 11:15:10.922469 1304 log.go:172] (0xc0008d65a0) (1) Data frame handling\nI0812 11:15:10.922480 1304 log.go:172] (0xc0008d65a0) (1) Data frame sent\nI0812 11:15:10.922487 1304 log.go:172] (0xc0008da210) (0xc0008d65a0) Stream removed, broadcasting: 1\nI0812 11:15:10.922493 1304 log.go:172] (0xc0008da210) Go away received\nI0812 11:15:10.922661 1304 log.go:172] (0xc0008da210) (0xc0008d65a0) Stream removed, broadcasting: 1\nI0812 11:15:10.922683 1304 log.go:172] (0xc0008da210) (0xc000736000) Stream removed, broadcasting: 3\nI0812 11:15:10.922695 1304 log.go:172] (0xc0008da210) (0xc000736140) Stream removed, broadcasting: 5\n" Aug 12 11:15:10.929: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 11:15:10.929: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 11:15:10.932: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 12 11:15:20.936: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 12 11:15:20.936: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 11:15:21.012: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:21.012: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:21.012: INFO: Aug 12 11:15:21.012: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 12 11:15:22.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.932555516s Aug 12 11:15:23.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.890642869s Aug 12 11:15:24.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.74622576s Aug 12 11:15:25.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.194675668s Aug 12 11:15:27.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.139883234s Aug 12 11:15:28.171: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.779174251s Aug 12 11:15:29.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.773027295s Aug 12 11:15:30.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 769.550974ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-hh4pq Aug 12 11:15:31.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:15:31.358: INFO: stderr: 
"I0812 11:15:31.290679 1327 log.go:172] (0xc000154840) (0xc000736640) Create stream\nI0812 11:15:31.290718 1327 log.go:172] (0xc000154840) (0xc000736640) Stream added, broadcasting: 1\nI0812 11:15:31.292510 1327 log.go:172] (0xc000154840) Reply frame received for 1\nI0812 11:15:31.292534 1327 log.go:172] (0xc000154840) (0xc0007366e0) Create stream\nI0812 11:15:31.292540 1327 log.go:172] (0xc000154840) (0xc0007366e0) Stream added, broadcasting: 3\nI0812 11:15:31.293327 1327 log.go:172] (0xc000154840) Reply frame received for 3\nI0812 11:15:31.293371 1327 log.go:172] (0xc000154840) (0xc0002cac80) Create stream\nI0812 11:15:31.293397 1327 log.go:172] (0xc000154840) (0xc0002cac80) Stream added, broadcasting: 5\nI0812 11:15:31.294177 1327 log.go:172] (0xc000154840) Reply frame received for 5\nI0812 11:15:31.352434 1327 log.go:172] (0xc000154840) Data frame received for 5\nI0812 11:15:31.352473 1327 log.go:172] (0xc0002cac80) (5) Data frame handling\nI0812 11:15:31.352501 1327 log.go:172] (0xc000154840) Data frame received for 3\nI0812 11:15:31.352513 1327 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0812 11:15:31.352527 1327 log.go:172] (0xc0007366e0) (3) Data frame sent\nI0812 11:15:31.352543 1327 log.go:172] (0xc000154840) Data frame received for 3\nI0812 11:15:31.352566 1327 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0812 11:15:31.353609 1327 log.go:172] (0xc000154840) Data frame received for 1\nI0812 11:15:31.353639 1327 log.go:172] (0xc000736640) (1) Data frame handling\nI0812 11:15:31.353652 1327 log.go:172] (0xc000736640) (1) Data frame sent\nI0812 11:15:31.353667 1327 log.go:172] (0xc000154840) (0xc000736640) Stream removed, broadcasting: 1\nI0812 11:15:31.353689 1327 log.go:172] (0xc000154840) Go away received\nI0812 11:15:31.353904 1327 log.go:172] (0xc000154840) (0xc000736640) Stream removed, broadcasting: 1\nI0812 11:15:31.353929 1327 log.go:172] (0xc000154840) (0xc0007366e0) Stream removed, broadcasting: 3\nI0812 11:15:31.353945 1327 log.go:172] (0xc000154840) (0xc0002cac80) Stream removed, broadcasting: 5\n" Aug 12 11:15:31.358: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 11:15:31.358: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 11:15:31.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:15:31.558: INFO: stderr: "I0812 11:15:31.481818 1349 log.go:172] (0xc000138840) (0xc0006ab4a0) Create stream\nI0812 11:15:31.481862 1349 log.go:172] (0xc000138840) (0xc0006ab4a0) Stream added, broadcasting: 1\nI0812 11:15:31.483390 1349 log.go:172] (0xc000138840) Reply frame received for 1\nI0812 11:15:31.483430 1349 log.go:172] (0xc000138840) (0xc0006a8000) Create stream\nI0812 11:15:31.483442 1349 log.go:172] (0xc000138840) (0xc0006a8000) Stream added, broadcasting: 3\nI0812 11:15:31.484213 1349 log.go:172] (0xc000138840) Reply frame received for 3\nI0812 11:15:31.484252 1349 log.go:172] (0xc000138840) (0xc00071c000) Create stream\nI0812 11:15:31.484266 1349 log.go:172] (0xc000138840) (0xc00071c000) Stream added, broadcasting: 5\nI0812 11:15:31.485187 1349 log.go:172] (0xc000138840) Reply frame received for 5\nI0812 11:15:31.552983 1349 log.go:172] (0xc000138840) Data frame received for 3\nI0812 11:15:31.553018 1349 log.go:172] (0xc000138840) Data frame received for 5\nI0812 11:15:31.553041 1349 
log.go:172] (0xc00071c000) (5) Data frame handling\nI0812 11:15:31.553052 1349 log.go:172] (0xc00071c000) (5) Data frame sent\nI0812 11:15:31.553061 1349 log.go:172] (0xc000138840) Data frame received for 5\nI0812 11:15:31.553069 1349 log.go:172] (0xc00071c000) (5) Data frame handling\nI0812 11:15:31.553083 1349 log.go:172] (0xc0006a8000) (3) Data frame handling\nI0812 11:15:31.553093 1349 log.go:172] (0xc0006a8000) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0812 11:15:31.553099 1349 log.go:172] (0xc000138840) Data frame received for 3\nI0812 11:15:31.553105 1349 log.go:172] (0xc0006a8000) (3) Data frame handling\nI0812 11:15:31.554104 1349 log.go:172] (0xc000138840) Data frame received for 1\nI0812 11:15:31.554119 1349 log.go:172] (0xc0006ab4a0) (1) Data frame handling\nI0812 11:15:31.554127 1349 log.go:172] (0xc0006ab4a0) (1) Data frame sent\nI0812 11:15:31.554137 1349 log.go:172] (0xc000138840) (0xc0006ab4a0) Stream removed, broadcasting: 1\nI0812 11:15:31.554149 1349 log.go:172] (0xc000138840) Go away received\nI0812 11:15:31.554356 1349 log.go:172] (0xc000138840) (0xc0006ab4a0) Stream removed, broadcasting: 1\nI0812 11:15:31.554375 1349 log.go:172] (0xc000138840) (0xc0006a8000) Stream removed, broadcasting: 3\nI0812 11:15:31.554383 1349 log.go:172] (0xc000138840) (0xc00071c000) Stream removed, broadcasting: 5\n" Aug 12 11:15:31.558: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 11:15:31.558: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 11:15:31.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:15:31.736: INFO: stderr: "I0812 11:15:31.670850 1371 log.go:172] (0xc0007ca420) (0xc0002e7540) Create stream\nI0812 11:15:31.670899 1371 log.go:172] (0xc0007ca420) (0xc0002e7540) Stream added, broadcasting: 1\nI0812 11:15:31.673189 1371 log.go:172] (0xc0007ca420) Reply frame received for 1\nI0812 11:15:31.673235 1371 log.go:172] (0xc0007ca420) (0xc0004a8000) Create stream\nI0812 11:15:31.673260 1371 log.go:172] (0xc0007ca420) (0xc0004a8000) Stream added, broadcasting: 3\nI0812 11:15:31.673982 1371 log.go:172] (0xc0007ca420) Reply frame received for 3\nI0812 11:15:31.674022 1371 log.go:172] (0xc0007ca420) (0xc0002e75e0) Create stream\nI0812 11:15:31.674041 1371 log.go:172] (0xc0007ca420) (0xc0002e75e0) Stream added, broadcasting: 5\nI0812 11:15:31.674811 1371 log.go:172] (0xc0007ca420) Reply frame received for 5\nI0812 11:15:31.730093 1371 log.go:172] (0xc0007ca420) Data frame received for 3\nI0812 11:15:31.730119 1371 log.go:172] (0xc0004a8000) (3) Data frame handling\nI0812 11:15:31.730145 1371 log.go:172] (0xc0007ca420) Data frame received for 5\nI0812 11:15:31.730182 1371 log.go:172] (0xc0002e75e0) (5) Data frame handling\nI0812 11:15:31.730203 1371 log.go:172] (0xc0002e75e0) (5) Data frame sent\nI0812 11:15:31.730236 1371 log.go:172] (0xc0007ca420) Data frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0812 11:15:31.730263 1371 log.go:172] (0xc0002e75e0) (5) Data frame handling\nI0812 11:15:31.730293 1371 log.go:172] (0xc0004a8000) (3) Data frame sent\nI0812 11:15:31.730315 1371 log.go:172] (0xc0007ca420) Data frame received for 3\nI0812 11:15:31.730334 1371 log.go:172] (0xc0004a8000) (3) Data frame handling\nI0812 11:15:31.730857 1371 
log.go:172] (0xc0007ca420) Data frame received for 1\nI0812 11:15:31.730884 1371 log.go:172] (0xc0002e7540) (1) Data frame handling\nI0812 11:15:31.730896 1371 log.go:172] (0xc0002e7540) (1) Data frame sent\nI0812 11:15:31.730912 1371 log.go:172] (0xc0007ca420) (0xc0002e7540) Stream removed, broadcasting: 1\nI0812 11:15:31.730964 1371 log.go:172] (0xc0007ca420) Go away received\nI0812 11:15:31.731123 1371 log.go:172] (0xc0007ca420) (0xc0002e7540) Stream removed, broadcasting: 1\nI0812 11:15:31.731141 1371 log.go:172] (0xc0007ca420) (0xc0004a8000) Stream removed, broadcasting: 3\nI0812 11:15:31.731152 1371 log.go:172] (0xc0007ca420) (0xc0002e75e0) Stream removed, broadcasting: 5\n" Aug 12 11:15:31.736: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 12 11:15:31.736: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 12 11:15:31.739: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 12 11:15:41.993: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:15:41.993: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:15:41.993: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 12 11:15:41.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 11:15:42.338: INFO: stderr: "I0812 11:15:42.281550 1392 log.go:172] (0xc00001e2c0) (0xc00082f9a0) Create stream\nI0812 11:15:42.281593 1392 log.go:172] (0xc00001e2c0) (0xc00082f9a0) Stream added, broadcasting: 1\nI0812 11:15:42.284028 1392 log.go:172] (0xc00001e2c0) Reply frame received for 1\nI0812 11:15:42.284061 1392 log.go:172] (0xc00001e2c0) (0xc0001b8be0) Create stream\nI0812 11:15:42.284073 1392 log.go:172] (0xc00001e2c0) (0xc0001b8be0) Stream added, broadcasting: 3\nI0812 11:15:42.284622 1392 log.go:172] (0xc00001e2c0) Reply frame received for 3\nI0812 11:15:42.284657 1392 log.go:172] (0xc00001e2c0) (0xc000174000) Create stream\nI0812 11:15:42.284673 1392 log.go:172] (0xc00001e2c0) (0xc000174000) Stream added, broadcasting: 5\nI0812 11:15:42.285331 1392 log.go:172] (0xc00001e2c0) Reply frame received for 5\nI0812 11:15:42.332946 1392 log.go:172] (0xc00001e2c0) Data frame received for 5\nI0812 11:15:42.332986 1392 log.go:172] (0xc000174000) (5) Data frame handling\nI0812 11:15:42.333015 1392 log.go:172] (0xc00001e2c0) Data frame received for 3\nI0812 11:15:42.333034 1392 log.go:172] (0xc0001b8be0) (3) Data frame handling\nI0812 11:15:42.333049 1392 log.go:172] (0xc0001b8be0) (3) Data frame sent\nI0812 11:15:42.333068 1392 log.go:172] (0xc00001e2c0) Data frame received for 3\nI0812 11:15:42.333079 1392 log.go:172] (0xc0001b8be0) (3) Data frame handling\nI0812 11:15:42.334129 1392 log.go:172] (0xc00001e2c0) Data frame received for 1\nI0812 11:15:42.334148 1392 log.go:172] (0xc00082f9a0) (1) Data frame handling\nI0812 11:15:42.334160 1392 log.go:172] (0xc00082f9a0) (1) Data frame sent\nI0812 11:15:42.334177 1392 log.go:172] (0xc00001e2c0) (0xc00082f9a0) Stream removed, broadcasting: 1\nI0812 11:15:42.334191 1392 log.go:172] (0xc00001e2c0) Go away received\nI0812 11:15:42.334386 1392 log.go:172] (0xc00001e2c0) (0xc00082f9a0) Stream removed, broadcasting: 1\nI0812 
11:15:42.334402 1392 log.go:172] (0xc00001e2c0) (0xc0001b8be0) Stream removed, broadcasting: 3\nI0812 11:15:42.334410 1392 log.go:172] (0xc00001e2c0) (0xc000174000) Stream removed, broadcasting: 5\n" Aug 12 11:15:42.339: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 11:15:42.339: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 11:15:42.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 11:15:42.843: INFO: stderr: "I0812 11:15:42.452332 1414 log.go:172] (0xc000138630) (0xc00070e640) Create stream\nI0812 11:15:42.452370 1414 log.go:172] (0xc000138630) (0xc00070e640) Stream added, broadcasting: 1\nI0812 11:15:42.454107 1414 log.go:172] (0xc000138630) Reply frame received for 1\nI0812 11:15:42.454133 1414 log.go:172] (0xc000138630) (0xc0005c2e60) Create stream\nI0812 11:15:42.454142 1414 log.go:172] (0xc000138630) (0xc0005c2e60) Stream added, broadcasting: 3\nI0812 11:15:42.454864 1414 log.go:172] (0xc000138630) Reply frame received for 3\nI0812 11:15:42.454888 1414 log.go:172] (0xc000138630) (0xc00070e6e0) Create stream\nI0812 11:15:42.454895 1414 log.go:172] (0xc000138630) (0xc00070e6e0) Stream added, broadcasting: 5\nI0812 11:15:42.455500 1414 log.go:172] (0xc000138630) Reply frame received for 5\nI0812 11:15:42.838637 1414 log.go:172] (0xc000138630) Data frame received for 3\nI0812 11:15:42.838659 1414 log.go:172] (0xc0005c2e60) (3) Data frame handling\nI0812 11:15:42.838676 1414 log.go:172] (0xc0005c2e60) (3) Data frame sent\nI0812 11:15:42.838685 1414 log.go:172] (0xc000138630) Data frame received for 3\nI0812 11:15:42.838691 1414 log.go:172] (0xc0005c2e60) (3) Data frame handling\nI0812 11:15:42.838736 1414 log.go:172] (0xc000138630) Data frame received for 5\nI0812 11:15:42.838746 1414 log.go:172] (0xc00070e6e0) (5) Data frame handling\nI0812 11:15:42.839902 1414 log.go:172] (0xc000138630) Data frame received for 1\nI0812 11:15:42.839917 1414 log.go:172] (0xc00070e640) (1) Data frame handling\nI0812 11:15:42.839931 1414 log.go:172] (0xc00070e640) (1) Data frame sent\nI0812 11:15:42.840067 1414 log.go:172] (0xc000138630) (0xc00070e640) Stream removed, broadcasting: 1\nI0812 11:15:42.840115 1414 log.go:172] (0xc000138630) Go away received\nI0812 11:15:42.840213 1414 log.go:172] (0xc000138630) (0xc00070e640) Stream removed, broadcasting: 1\nI0812 11:15:42.840222 1414 log.go:172] (0xc000138630) (0xc0005c2e60) Stream removed, broadcasting: 3\nI0812 11:15:42.840230 1414 log.go:172] (0xc000138630) (0xc00070e6e0) Stream removed, broadcasting: 5\n" Aug 12 11:15:42.843: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 11:15:42.843: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 11:15:42.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 12 11:15:43.128: INFO: stderr: "I0812 11:15:42.965733 1437 log.go:172] (0xc00015c580) (0xc0005c34a0) Create stream\nI0812 11:15:42.965790 1437 log.go:172] (0xc00015c580) (0xc0005c34a0) Stream added, broadcasting: 1\nI0812 11:15:42.972479 1437 log.go:172] (0xc00015c580) Reply frame received for 1\nI0812 11:15:42.972532 1437 
log.go:172] (0xc00015c580) (0xc0006d6000) Create stream\nI0812 11:15:42.972547 1437 log.go:172] (0xc00015c580) (0xc0006d6000) Stream added, broadcasting: 3\nI0812 11:15:42.974863 1437 log.go:172] (0xc00015c580) Reply frame received for 3\nI0812 11:15:42.974890 1437 log.go:172] (0xc00015c580) (0xc0006d60a0) Create stream\nI0812 11:15:42.974897 1437 log.go:172] (0xc00015c580) (0xc0006d60a0) Stream added, broadcasting: 5\nI0812 11:15:42.979249 1437 log.go:172] (0xc00015c580) Reply frame received for 5\nI0812 11:15:43.121643 1437 log.go:172] (0xc00015c580) Data frame received for 3\nI0812 11:15:43.121665 1437 log.go:172] (0xc0006d6000) (3) Data frame handling\nI0812 11:15:43.121686 1437 log.go:172] (0xc0006d6000) (3) Data frame sent\nI0812 11:15:43.121942 1437 log.go:172] (0xc00015c580) Data frame received for 5\nI0812 11:15:43.121959 1437 log.go:172] (0xc0006d60a0) (5) Data frame handling\nI0812 11:15:43.122039 1437 log.go:172] (0xc00015c580) Data frame received for 3\nI0812 11:15:43.122054 1437 log.go:172] (0xc0006d6000) (3) Data frame handling\nI0812 11:15:43.123537 1437 log.go:172] (0xc00015c580) Data frame received for 1\nI0812 11:15:43.123553 1437 log.go:172] (0xc0005c34a0) (1) Data frame handling\nI0812 11:15:43.123562 1437 log.go:172] (0xc0005c34a0) (1) Data frame sent\nI0812 11:15:43.123571 1437 log.go:172] (0xc00015c580) (0xc0005c34a0) Stream removed, broadcasting: 1\nI0812 11:15:43.123613 1437 log.go:172] (0xc00015c580) Go away received\nI0812 11:15:43.123728 1437 log.go:172] (0xc00015c580) (0xc0005c34a0) Stream removed, broadcasting: 1\nI0812 11:15:43.123751 1437 log.go:172] (0xc00015c580) (0xc0006d6000) Stream removed, broadcasting: 3\nI0812 11:15:43.123764 1437 log.go:172] (0xc00015c580) (0xc0006d60a0) Stream removed, broadcasting: 5\n" Aug 12 11:15:43.128: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 12 11:15:43.128: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 12 11:15:43.128: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 11:15:43.186: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 12 11:15:53.190: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 12 11:15:53.190: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 12 11:15:53.190: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 12 11:15:53.231: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:53.231: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:53.231: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:53.231: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:53.231: INFO: Aug 12 11:15:53.231: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 12 11:15:54.401: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:54.401: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:54.401: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:54.401: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:54.401: INFO: Aug 12 11:15:54.401: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 12 11:15:55.497: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:55.497: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:55.497: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:55.497: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:55.497: INFO: Aug 12 11:15:55.497: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 12 11:15:56.500: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:56.500: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:56.500: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:56.500: INFO: Aug 12 11:15:56.500: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 12 11:15:57.504: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:57.504: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:57.504: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:57.504: INFO: Aug 12 11:15:57.504: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 12 11:15:58.508: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:58.508: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:58.508: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:58.508: INFO: Aug 12 11:15:58.508: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 12 11:15:59.512: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:15:59.512: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:15:59.512: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:15:59.512: INFO: Aug 12 11:15:59.512: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 12 11:16:00.516: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:16:00.516: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:16:00.516: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:16:00.517: INFO: Aug 12 11:16:00.517: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 12 11:16:01.528: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:16:01.528: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:16:01.528: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:16:01.528: INFO: Aug 12 11:16:01.528: INFO: StatefulSet ss has not reached scale 0, at 2 Aug 12 11:16:02.533: INFO: POD NODE PHASE GRACE CONDITIONS Aug 12 11:16:02.533: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:00 +0000 UTC }] Aug 12 11:16:02.533: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:15:21 +0000 UTC }] Aug 12 11:16:02.533: INFO: Aug 12 11:16:02.533: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-hh4pq Aug 12 11:16:03.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:16:03.658: INFO: rc: 1 Aug 12 11:16:03.658: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001725620 exit status 1 true [0xc0014a8148 0xc0014a8160 0xc0014a8178] [0xc0014a8148 0xc0014a8160 0xc0014a8178] [0xc0014a8158 0xc0014a8170] [0x935700 0x935700] 0xc000834360 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Aug 12 11:16:13.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:16:13.749: INFO: rc: 1 Aug 12 11:16:13.749: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ebc420 exit status 1 true [0xc001630138 0xc001630150 0xc001630168] [0xc001630138 0xc001630150 0xc001630168] [0xc001630148 0xc001630160] [0x935700 0x935700] 0xc001fd8540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:16:23.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:16:23.841: INFO: rc: 1 Aug 12 11:16:23.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001685140 exit status 1 true [0xc00000f1c0 0xc00000f2f0 0xc00000f320] [0xc00000f1c0 0xc00000f2f0 0xc00000f320] [0xc00000f2a0 0xc00000f308] [0x935700 0x935700] 0xc0018b0360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:16:33.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:16:33.942: INFO: rc: 1 Aug 12 11:16:33.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001685350 exit status 1 true [0xc00000f330 0xc00000f360 0xc00000f398] [0xc00000f330 0xc00000f360 0xc00000f398] [0xc00000f348 0xc00000f390] [0x935700 0x935700] 0xc0018b0660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:16:43.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:16:44.036: INFO: rc: 1 Aug 12 11:16:44.036: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016855c0 exit status 1 true [0xc00000f3a8 0xc00000f3e0 0xc00000f418] [0xc00000f3a8 0xc00000f3e0 0xc00000f418] [0xc00000f3c0 0xc00000f410] [0x935700 0x935700] 0xc0018b0900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:16:54.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:16:54.123: INFO: rc: 1 Aug 12 11:16:54.124: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001685770 exit status 1 true [0xc00000f428 0xc00000f448 0xc00000f490] [0xc00000f428 0xc00000f448 0xc00000f490] [0xc00000f440 0xc00000f460] [0x935700 0x935700] 0xc0018b0ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:17:04.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:17:04.214: INFO: rc: 1 Aug 12 11:17:04.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): 
pods "ss-0" not found [] 0xc0016fa8a0 exit status 1 true [0xc001120078 0xc001120090 0xc0011200a8] [0xc001120078 0xc001120090 0xc0011200a8] [0xc001120088 0xc0011200a0] [0x935700 0x935700] 0xc00127fec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:17:14.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:17:14.304: INFO: rc: 1 Aug 12 11:17:14.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017257a0 exit status 1 true [0xc0014a8180 0xc0014a8198 0xc0014a81b0] [0xc0014a8180 0xc0014a8198 0xc0014a81b0] [0xc0014a8190 0xc0014a81a8] [0x935700 0x935700] 0xc000835a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:17:24.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:17:24.389: INFO: rc: 1 Aug 12 11:17:24.389: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016faa50 exit status 1 true [0xc0011200b0 0xc0011200c8 0xc0011200e0] [0xc0011200b0 0xc0011200c8 0xc0011200e0] [0xc0011200c0 0xc0011200d8] [0x935700 0x935700] 0xc001fc01e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:17:34.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:17:34.484: INFO: rc: 1 Aug 12 11:17:34.484: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019701e0 exit status 1 true [0xc001630008 0xc001630020 0xc001630038] [0xc001630008 0xc001630020 0xc001630038] [0xc001630018 0xc001630030] [0x935700 0x935700] 0xc00127e240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:17:44.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:17:44.573: INFO: rc: 1 Aug 12 11:17:44.573: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000223530 exit status 1 true [0xc0014a8000 0xc0014a8018 0xc0014a8030] [0xc0014a8000 0xc0014a8018 0xc0014a8030] [0xc0014a8010 0xc0014a8028] [0x935700 0x935700] 
0xc0015e63c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:17:54.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:17:54.656: INFO: rc: 1 Aug 12 11:17:54.657: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001970390 exit status 1 true [0xc001630040 0xc001630058 0xc001630070] [0xc001630040 0xc001630058 0xc001630070] [0xc001630050 0xc001630068] [0x935700 0x935700] 0xc00127eae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:18:04.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:18:04.740: INFO: rc: 1 Aug 12 11:18:04.741: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001970540 exit status 1 true [0xc001630078 0xc001630090 0xc0016300a8] [0xc001630078 0xc001630090 0xc0016300a8] [0xc001630088 0xc0016300a0] [0x935700 0x935700] 0xc00127ff20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:18:14.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:18:14.830: INFO: rc: 1 Aug 12 11:18:14.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019706f0 exit status 1 true [0xc0016300b0 0xc0016300c8 0xc0016300e0] [0xc0016300b0 0xc0016300c8 0xc0016300e0] [0xc0016300c0 0xc0016300d8] [0x935700 0x935700] 0xc001362c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:18:24.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:18:24.914: INFO: rc: 1 Aug 12 11:18:24.915: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00045b020 exit status 1 true [0xc001120000 0xc001120018 0xc001120030] [0xc001120000 0xc001120018 0xc001120030] [0xc001120010 0xc001120028] [0x935700 0x935700] 0xc001693b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:18:34.915: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:18:34.989: INFO: rc: 1 Aug 12 11:18:34.989: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00045b1a0 exit status 1 true [0xc001120038 0xc001120050 0xc001120068] [0xc001120038 0xc001120050 0xc001120068] [0xc001120048 0xc001120060] [0x935700 0x935700] 0xc001f3d920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:18:44.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:18:45.082: INFO: rc: 1 Aug 12 11:18:45.082: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000223ad0 exit status 1 true [0xc0014a8038 0xc0014a8050 0xc0014a8068] [0xc0014a8038 0xc0014a8050 0xc0014a8068] [0xc0014a8048 0xc0014a8060] [0x935700 0x935700] 0xc0015e6b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:18:55.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:18:55.163: INFO: rc: 1 Aug 12 11:18:55.163: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0004157d0 exit status 1 true [0xc00000e028 0xc00000ed18 0xc00000f050] [0xc00000e028 0xc00000ed18 0xc00000f050] [0xc00000eab0 0xc00000f040] [0x935700 0x935700] 0xc0008355c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:19:05.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:19:05.257: INFO: rc: 1 Aug 12 11:19:05.257: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ebc060 exit status 1 true [0xc0014a8070 0xc0014a8088 0xc0014a80a8] [0xc0014a8070 0xc0014a8088 0xc0014a80a8] [0xc0014a8080 0xc0014a80a0] [0x935700 0x935700] 0xc0015e7b00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:19:15.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:19:15.340: INFO: rc: 1 Aug 12 
11:19:15.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ebc1b0 exit status 1 true [0xc0014a80b0 0xc0014a80c8 0xc0014a80e0] [0xc0014a80b0 0xc0014a80c8 0xc0014a80e0] [0xc0014a80c0 0xc0014a80d8] [0x935700 0x935700] 0xc0015e7f80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:19:25.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:19:25.426: INFO: rc: 1 Aug 12 11:19:25.426: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001970870 exit status 1 true [0xc0016300e8 0xc001630100 0xc001630118] [0xc0016300e8 0xc001630100 0xc001630118] [0xc0016300f8 0xc001630110] [0x935700 0x935700] 0xc0013637a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:19:35.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:19:35.513: INFO: rc: 1 Aug 12 11:19:35.513: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017240f0 exit status 1 true [0xc000dee020 0xc000dee050 0xc000dee088] [0xc000dee020 0xc000dee050 0xc000dee088] [0xc000dee048 0xc000dee080] [0x935700 0x935700] 0xc001fc0240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:19:45.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:19:45.718: INFO: rc: 1 Aug 12 11:19:45.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000223500 exit status 1 true [0xc001120000 0xc001120018 0xc001120030] [0xc001120000 0xc001120018 0xc001120030] [0xc001120010 0xc001120028] [0x935700 0x935700] 0xc001693b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:19:55.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:19:55.800: INFO: rc: 1 Aug 12 11:19:55.800: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017242a0 exit status 1 true [0xc000dee098 0xc000dee0c0 0xc000dee0f0] [0xc000dee098 0xc000dee0c0 0xc000dee0f0] [0xc000dee0b0 0xc000dee0e8] [0x935700 0x935700] 0xc0015e63c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:20:05.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:20:05.880: INFO: rc: 1 Aug 12 11:20:05.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002239b0 exit status 1 true [0xc001120038 0xc001120050 0xc001120068] [0xc001120038 0xc001120050 0xc001120068] [0xc001120048 0xc001120060] [0x935700 0x935700] 0xc00127e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:20:15.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:20:15.970: INFO: rc: 1 Aug 12 11:20:15.970: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000415830 exit status 1 true [0xc00000e028 0xc00000ed18 0xc00000f050] [0xc00000e028 0xc00000ed18 0xc00000f050] [0xc00000eab0 0xc00000f040] [0x935700 0x935700] 0xc001f3d500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:20:25.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:20:26.049: INFO: rc: 1 Aug 12 11:20:26.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00045b0e0 exit status 1 true [0xc0014a8000 0xc0014a8018 0xc0014a8030] [0xc0014a8000 0xc0014a8018 0xc0014a8030] [0xc0014a8010 0xc0014a8028] [0x935700 0x935700] 0xc001fc0540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:20:36.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:20:36.123: INFO: rc: 1 Aug 12 11:20:36.123: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00045b260 exit status 1 true [0xc0014a8038 0xc0014a8050 
0xc0014a8068] [0xc0014a8038 0xc0014a8050 0xc0014a8068] [0xc0014a8048 0xc0014a8060] [0x935700 0x935700] 0xc001fc0900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:20:46.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:20:46.545: INFO: rc: 1 Aug 12 11:20:46.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017243f0 exit status 1 true [0xc000dee0f8 0xc000dee128 0xc000dee150] [0xc000dee0f8 0xc000dee128 0xc000dee150] [0xc000dee110 0xc000dee148] [0x935700 0x935700] 0xc0015e6b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:20:56.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:20:56.923: INFO: rc: 1 Aug 12 11:20:56.923: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ebc000 exit status 1 true [0xc001120070 0xc001120088 0xc0011200a0] [0xc001120070 0xc001120088 0xc0011200a0] [0xc001120080 0xc001120098] [0x935700 0x935700] 0xc00127ec60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Aug 12 11:21:06.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hh4pq ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 12 11:21:07.422: INFO: rc: 1 Aug 12 11:21:07.422: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Aug 12 11:21:07.422: INFO: Scaling statefulset ss to 0 Aug 12 11:21:07.501: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 12 11:21:07.502: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hh4pq Aug 12 11:21:07.504: INFO: Scaling statefulset ss to 0 Aug 12 11:21:07.511: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 11:21:07.512: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:21:07.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hh4pq" for this suite. 
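The burst-scaling spec above works by deliberately breaking readiness before scaling down: it moves /usr/share/nginx/html/index.html out of the nginx web root on every replica (the pods flip to Ready=false right afterwards in the log), scales the StatefulSet to 0, and keeps retrying the restore command until the pods are gone. A rough shell sketch of the same sequence with plain kubectl, assuming a placeholder namespace NS and the three-replica nginx StatefulSet ss seen in this run; the polling interval and the jsonpath query are illustrative, not taken from the test:

# Break readiness on every replica by hiding the page the probe serves.
for p in ss-0 ss-1 ss-2; do
  kubectl --namespace=NS exec "$p" -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
done

# Scale to zero. With podManagementPolicy: Parallel (the "burst" mode this
# spec exercises) the controller deletes replicas without waiting for each
# one to become Ready first.
kubectl --namespace=NS scale statefulset ss --replicas=0

# Poll status.replicas until it reaches 0, mirroring the waits in the log.
until [ "$(kubectl --namespace=NS get statefulset ss -o jsonpath='{.status.replicas}')" = "0" ]; do
  sleep 10
done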
Aug 12 11:21:15.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:21:15.957: INFO: namespace: e2e-tests-statefulset-hh4pq, resource: bindings, ignored listing per whitelist Aug 12 11:21:15.975: INFO: namespace e2e-tests-statefulset-hh4pq deletion completed in 8.364467308s • [SLOW TEST:375.448 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:21:15.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:21:17.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-gc8zm" to be "success or failure" Aug 12 11:21:17.647: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 550.548412ms Aug 12 11:21:19.651: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.553914949s Aug 12 11:21:21.654: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.55723047s Aug 12 11:21:23.893: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.796432097s Aug 12 11:21:25.918: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.821210035s Aug 12 11:21:27.921: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.823997209s STEP: Saw pod success Aug 12 11:21:27.921: INFO: Pod "downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:21:27.944: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:21:28.095: INFO: Waiting for pod downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c to disappear Aug 12 11:21:28.099: INFO: Pod downwardapi-volume-f0606f48-dc8d-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:21:28.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gc8zm" for this suite. Aug 12 11:21:34.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:21:34.148: INFO: namespace: e2e-tests-projected-gc8zm, resource: bindings, ignored listing per whitelist Aug 12 11:21:34.171: INFO: namespace e2e-tests-projected-gc8zm deletion completed in 6.069298866s • [SLOW TEST:18.195 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:21:34.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Aug 12 11:21:34.319: INFO: Waiting up to 5m0s for pod "client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-containers-h2j9j" to be "success or failure" Aug 12 11:21:34.323: INFO: Pod "client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.940699ms Aug 12 11:21:36.327: INFO: Pod "client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007374023s Aug 12 11:21:38.330: INFO: Pod "client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.010588152s Aug 12 11:21:40.332: INFO: Pod "client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012903097s STEP: Saw pod success Aug 12 11:21:40.332: INFO: Pod "client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:21:40.334: INFO: Trying to get logs from node hunter-worker2 pod client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 11:21:40.638: INFO: Waiting for pod client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c to disappear Aug 12 11:21:40.695: INFO: Pod client-containers-fab86acb-dc8d-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:21:40.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-h2j9j" for this suite. Aug 12 11:21:48.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:21:49.549: INFO: namespace: e2e-tests-containers-h2j9j, resource: bindings, ignored listing per whitelist Aug 12 11:21:49.603: INFO: namespace e2e-tests-containers-h2j9j deletion completed in 8.905136856s • [SLOW TEST:15.432 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:21:49.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-046e3040-dc8e-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:21:50.697: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-xfblt" to be "success or failure" Aug 12 11:21:50.839: INFO: Pod "pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 142.037846ms Aug 12 11:21:52.843: INFO: Pod "pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146080513s Aug 12 11:21:54.989: INFO: Pod "pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291831044s Aug 12 11:21:56.992: INFO: Pod "pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.294792352s STEP: Saw pod success Aug 12 11:21:56.992: INFO: Pod "pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:21:56.994: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 12 11:21:57.853: INFO: Waiting for pod pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c to disappear Aug 12 11:21:57.860: INFO: Pod pod-projected-configmaps-046e943a-dc8e-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:21:57.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xfblt" for this suite. Aug 12 11:22:04.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:22:04.347: INFO: namespace: e2e-tests-projected-xfblt, resource: bindings, ignored listing per whitelist Aug 12 11:22:04.387: INFO: namespace e2e-tests-projected-xfblt deletion completed in 6.523804032s • [SLOW TEST:14.783 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:22:04.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:22:08.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-wpfzp" for this suite. 
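The Kubelet hostAliases spec that finishes above only logs its namespace teardown, but what it verifies is that entries declared in pod.spec.hostAliases are written by the kubelet into the container's /etc/hosts. A minimal way to reproduce that check by hand, with the namespace NS, pod name, image, and addresses all assumed for illustration rather than taken from this run:

cat <<'EOF' | kubectl --namespace=NS apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: host-aliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF

# The kubelet merges the aliases into the managed /etc/hosts file.
kubectl --namespace=NS exec host-aliases-demo -- cat /etc/hosts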
Aug 12 11:22:59.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:22:59.682: INFO: namespace: e2e-tests-kubelet-test-wpfzp, resource: bindings, ignored listing per whitelist Aug 12 11:22:59.729: INFO: namespace e2e-tests-kubelet-test-wpfzp deletion completed in 50.902616899s • [SLOW TEST:55.342 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:22:59.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-pld7n STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-pld7n STEP: Deleting pre-stop pod Aug 12 11:23:17.443: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:23:17.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-pld7n" for this suite. 
Aug 12 11:23:59.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:23:59.979: INFO: namespace: e2e-tests-prestop-pld7n, resource: bindings, ignored listing per whitelist Aug 12 11:24:00.022: INFO: namespace e2e-tests-prestop-pld7n deletion completed in 42.457601777s • [SLOW TEST:60.293 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:24:00.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:25:00.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fzs2l" for this suite. 
Aug 12 11:25:22.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:25:22.436: INFO: namespace: e2e-tests-container-probe-fzs2l, resource: bindings, ignored listing per whitelist Aug 12 11:25:22.449: INFO: namespace e2e-tests-container-probe-fzs2l deletion completed in 22.216413626s • [SLOW TEST:82.426 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:25:22.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Aug 12 11:25:23.679: INFO: Waiting up to 5m0s for pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29" in namespace "e2e-tests-svcaccounts-wjb6v" to be "success or failure" Aug 12 11:25:24.045: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29": Phase="Pending", Reason="", readiness=false. Elapsed: 365.876192ms Aug 12 11:25:26.049: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369627553s Aug 12 11:25:28.427: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74824561s Aug 12 11:25:30.572: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.892815245s Aug 12 11:25:32.575: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.895616046s Aug 12 11:25:34.578: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.899417665s STEP: Saw pod success Aug 12 11:25:34.578: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29" satisfied condition "success or failure" Aug 12 11:25:34.582: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29 container token-test: STEP: delete the pod Aug 12 11:25:34.621: INFO: Waiting for pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29 to disappear Aug 12 11:25:34.635: INFO: Pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-54x29 no longer exists STEP: Creating a pod to test consume service account root CA Aug 12 11:25:34.641: INFO: Waiting up to 5m0s for pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss" in namespace "e2e-tests-svcaccounts-wjb6v" to be "success or failure" Aug 12 11:25:34.662: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss": Phase="Pending", Reason="", readiness=false. Elapsed: 21.323194ms Aug 12 11:25:36.709: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068180147s Aug 12 11:25:38.714: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073101843s Aug 12 11:25:40.727: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086346155s Aug 12 11:25:42.732: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091214076s STEP: Saw pod success Aug 12 11:25:42.732: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss" satisfied condition "success or failure" Aug 12 11:25:42.734: INFO: Trying to get logs from node hunter-worker pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss container root-ca-test: STEP: delete the pod Aug 12 11:25:42.773: INFO: Waiting for pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss to disappear Aug 12 11:25:42.791: INFO: Pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-987ss no longer exists STEP: Creating a pod to test consume service account namespace Aug 12 11:25:42.794: INFO: Waiting up to 5m0s for pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9" in namespace "e2e-tests-svcaccounts-wjb6v" to be "success or failure" Aug 12 11:25:42.813: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.919326ms Aug 12 11:25:44.815: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02165273s Aug 12 11:25:46.818: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024243307s Aug 12 11:25:48.821: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9": Phase="Running", Reason="", readiness=false. Elapsed: 6.026791869s Aug 12 11:25:50.823: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.029283186s STEP: Saw pod success Aug 12 11:25:50.823: INFO: Pod "pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9" satisfied condition "success or failure" Aug 12 11:25:50.825: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9 container namespace-test: STEP: delete the pod Aug 12 11:25:50.878: INFO: Waiting for pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9 to disappear Aug 12 11:25:50.885: INFO: Pod pod-service-account-836e7d42-dc8e-11ea-9b9c-0242ac11000c-mlgd9 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:25:50.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-wjb6v" for this suite. Aug 12 11:26:01.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:26:01.180: INFO: namespace: e2e-tests-svcaccounts-wjb6v, resource: bindings, ignored listing per whitelist Aug 12 11:26:01.195: INFO: namespace e2e-tests-svcaccounts-wjb6v deletion completed in 10.307734526s • [SLOW TEST:38.746 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:26:01.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:26:17.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-tchz9" for this suite. 
Aug 12 11:26:47.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:26:47.844: INFO: namespace: e2e-tests-replication-controller-tchz9, resource: bindings, ignored listing per whitelist Aug 12 11:26:47.857: INFO: namespace e2e-tests-replication-controller-tchz9 deletion completed in 30.75907987s • [SLOW TEST:46.662 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:26:47.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 12 11:26:47.974: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 12 11:26:47.978: INFO: Waiting for terminating namespaces to be deleted... Aug 12 11:26:47.980: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 12 11:26:47.985: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 12 11:26:47.985: INFO: Container kube-proxy ready: true, restart count 0 Aug 12 11:26:47.985: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 12 11:26:47.985: INFO: Container kindnet-cni ready: true, restart count 0 Aug 12 11:26:47.985: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 12 11:26:47.989: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 12 11:26:47.989: INFO: Container kube-proxy ready: true, restart count 0 Aug 12 11:26:47.989: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 12 11:26:47.989: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b81ac239-dc8e-11ea-9b9c-0242ac11000c 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-b81ac239-dc8e-11ea-9b9c-0242ac11000c off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b81ac239-dc8e-11ea-9b9c-0242ac11000c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:26:56.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-v7rj8" for this suite. Aug 12 11:27:10.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:27:10.374: INFO: namespace: e2e-tests-sched-pred-v7rj8, resource: bindings, ignored listing per whitelist Aug 12 11:27:10.402: INFO: namespace e2e-tests-sched-pred-v7rj8 deletion completed in 14.234645802s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.545 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:27:10.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c34d5346-dc8e-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:27:10.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-q4km8" to be "success or failure" Aug 12 11:27:11.004: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 156.230772ms Aug 12 11:27:13.006: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158571735s Aug 12 11:27:15.380: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533163332s Aug 12 11:27:17.384: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5369161s Aug 12 11:27:19.464: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61720959s Aug 12 11:27:21.468: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.620393273s STEP: Saw pod success Aug 12 11:27:21.468: INFO: Pod "pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:27:21.470: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 11:27:21.789: INFO: Waiting for pod pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c to disappear Aug 12 11:27:21.809: INFO: Pod pod-configmaps-c34dcf6b-dc8e-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:27:21.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-q4km8" for this suite. Aug 12 11:27:27.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:27:27.856: INFO: namespace: e2e-tests-configmap-q4km8, resource: bindings, ignored listing per whitelist Aug 12 11:27:27.888: INFO: namespace e2e-tests-configmap-q4km8 deletion completed in 6.076949288s • [SLOW TEST:17.486 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:27:27.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:27:28.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-thj4l" to be "success or failure" Aug 12 11:27:28.013: INFO: Pod "downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180219ms Aug 12 11:27:30.017: INFO: Pod "downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012690637s Aug 12 11:27:32.019: INFO: Pod "downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015025768s Aug 12 11:27:34.022: INFO: Pod "downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017712394s STEP: Saw pod success Aug 12 11:27:34.022: INFO: Pod "downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:27:34.023: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:27:35.473: INFO: Waiting for pod downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c to disappear Aug 12 11:27:35.701: INFO: Pod downwardapi-volume-cd86c010-dc8e-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:27:35.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-thj4l" for this suite. Aug 12 11:27:43.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:27:43.895: INFO: namespace: e2e-tests-projected-thj4l, resource: bindings, ignored listing per whitelist Aug 12 11:27:43.947: INFO: namespace e2e-tests-projected-thj4l deletion completed in 8.242311355s • [SLOW TEST:16.059 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:27:43.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 12 11:27:55.821: INFO: 8 pods remaining Aug 12 11:27:55.821: INFO: 3 pods has nil DeletionTimestamp Aug 12 11:27:55.821: INFO: Aug 12 11:27:57.566: INFO: 0 pods remaining Aug 12 11:27:57.566: INFO: 0 pods has nil DeletionTimestamp Aug 12 11:27:57.566: INFO: Aug 12 11:27:58.764: INFO: 0 pods remaining Aug 12 11:27:58.764: INFO: 0 pods has nil DeletionTimestamp Aug 12 11:27:58.764: INFO: STEP: Gathering metrics W0812 11:28:01.118943 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 12 11:28:01.118: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:28:01.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-gtvlr" for this suite. Aug 12 11:28:08.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:28:08.283: INFO: namespace: e2e-tests-gc-gtvlr, resource: bindings, ignored listing per whitelist Aug 12 11:28:08.293: INFO: namespace e2e-tests-gc-gtvlr deletion completed in 7.171135961s • [SLOW TEST:24.345 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:28:08.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:28:08.420: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Aug 12 11:28:15.198: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:28:15.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xllv2" for this suite.
Aug 12 11:28:21.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:28:21.354: INFO: namespace: e2e-tests-kubectl-xllv2, resource: bindings, ignored listing per whitelist
Aug 12 11:28:21.397: INFO: namespace e2e-tests-kubectl-xllv2 deletion completed in 6.103095019s

• [SLOW TEST:6.345 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
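
For anyone replaying this spec outside the Ginkgo harness, here is a rough Go sketch of what it does: start "kubectl proxy -p 0 --disable-filter" (port 0 lets the kernel pick a free port), read the port the proxy announces on stdout, then fetch /api/ through it. The announcement format and the regexp below are assumptions, not taken from the test code.

// proxydemo is a hedged sketch, not the e2e framework's own helper.
package proxydemo

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

// curlThroughProxy starts kubectl proxy on a random port and GETs /api/.
func curlThroughProxy(kubeconfig string) (string, error) {
	cmd := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	defer cmd.Process.Kill() // best effort: stop the proxy when done

	// Assumption: the proxy prints a line like "Starting to serve on 127.0.0.1:37651".
	scanner := bufio.NewScanner(stdout)
	portRe := regexp.MustCompile(`127\.0\.0\.1:(\d+)`)
	for scanner.Scan() {
		m := portRe.FindStringSubmatch(scanner.Text())
		if m == nil {
			continue
		}
		resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", m[1]))
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		return string(body), err
	}
	return "", fmt.Errorf("proxy did not report a listening port")
}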
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:28:21.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0812 11:28:31.493872       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 12 11:28:31.493: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:28:31.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7xxxd" for this suite.
Aug 12 11:28:37.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:28:37.925: INFO: namespace: e2e-tests-gc-7xxxd, resource: bindings, ignored listing per whitelist
Aug 12 11:28:37.947: INFO: namespace e2e-tests-gc-7xxxd deletion completed in 6.451014978s

• [SLOW TEST:16.550 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
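
The behaviour checked here hinges on the deletion propagation policy set on the owning ReplicationController. A minimal client-go sketch of the two choices follows; it assumes a recent client-go (older releases, closer to the v1.13 vintage in this log, take *metav1.DeleteOptions and no context argument), and the kubeconfig path is only an example.

// gcdemo is a hedged sketch of cascading vs. orphaning deletion.
package gcdemo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteRC removes an RC; with orphan=false the garbage collector also
// deletes the RC's pods (the case this spec verifies), with orphan=true
// the pods are left behind.
func deleteRC(ns, name string, orphan bool) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	policy := metav1.DeletePropagationBackground
	if orphan {
		policy = metav1.DeletePropagationOrphan
	}
	return cs.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
}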
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:28:37.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-q9q8
STEP: Creating a pod to test atomic-volume-subpath
Aug 12 11:28:38.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q9q8" in namespace "e2e-tests-subpath-l5vpx" to be "success or failure"
Aug 12 11:28:38.567: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 114.992494ms
Aug 12 11:28:40.570: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118294492s
Aug 12 11:28:42.573: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120995821s
Aug 12 11:28:44.580: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12783842s
Aug 12 11:28:46.746: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293734465s
Aug 12 11:28:48.826: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.373838395s
Aug 12 11:28:50.829: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.37680576s
Aug 12 11:28:52.832: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.379492174s
Aug 12 11:28:54.836: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.384205905s
Aug 12 11:28:56.981: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 18.529113792s
Aug 12 11:28:58.985: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 20.532935373s
Aug 12 11:29:01.083: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 22.630395712s
Aug 12 11:29:03.086: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 24.633417383s
Aug 12 11:29:05.089: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 26.636894507s
Aug 12 11:29:07.346: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 28.893944561s
Aug 12 11:29:09.349: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 30.897239579s
Aug 12 11:29:11.352: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 32.899818375s
Aug 12 11:29:13.355: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Running", Reason="", readiness=false. Elapsed: 34.90274504s
Aug 12 11:29:15.358: INFO: Pod "pod-subpath-test-configmap-q9q8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.905632346s
STEP: Saw pod success
Aug 12 11:29:15.358: INFO: Pod "pod-subpath-test-configmap-q9q8" satisfied condition "success or failure"
Aug 12 11:29:15.359: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-q9q8 container test-container-subpath-configmap-q9q8: 
STEP: delete the pod
Aug 12 11:29:15.451: INFO: Waiting for pod pod-subpath-test-configmap-q9q8 to disappear
Aug 12 11:29:15.506: INFO: Pod pod-subpath-test-configmap-q9q8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-q9q8
Aug 12 11:29:15.506: INFO: Deleting pod "pod-subpath-test-configmap-q9q8" in namespace "e2e-tests-subpath-l5vpx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:29:15.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-l5vpx" for this suite.
Aug 12 11:29:26.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:29:26.127: INFO: namespace: e2e-tests-subpath-l5vpx, resource: bindings, ignored listing per whitelist
Aug 12 11:29:26.137: INFO: namespace e2e-tests-subpath-l5vpx deletion completed in 10.626237873s

• [SLOW TEST:48.191 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
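
As a reference for the pod shape this spec builds, a minimal sketch follows: a configMap volume whose single key is mounted at a subPath inside the container. The names, image, and key are illustrative placeholders, not the generated names in the log above.

// subpathdemo is a hedged sketch of a configMap subPath mount.
package subpathdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subPathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config-volume",
					MountPath: "/etc/podinfo/data-1",
					// SubPath mounts a single entry of the volume instead of
					// the whole directory; atomic-writer volumes such as
					// configMaps are what this spec exercises.
					SubPath: "data-1",
				}},
			}},
		},
	}
}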
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:29:26.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4q2zq
Aug 12 11:29:34.469: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4q2zq
STEP: checking the pod's current state and verifying that restartCount is present
Aug 12 11:29:34.471: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:33:36.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4q2zq" for this suite.
Aug 12 11:33:42.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:33:42.669: INFO: namespace: e2e-tests-container-probe-4q2zq, resource: bindings, ignored listing per whitelist
Aug 12 11:33:42.735: INFO: namespace e2e-tests-container-probe-4q2zq deletion completed in 6.169783961s

• [SLOW TEST:256.597 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
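
The container under test pairs a command that keeps /tmp/health present with an exec liveness probe that cats it, so the probe keeps succeeding and restartCount stays at 0. A hedged sketch of that wiring (names and timings are illustrative, not the test's own values):

// probedemo is a hedged sketch of an exec liveness probe.
package probedemo

import corev1 "k8s.io/api/core/v1"

func execLivenessContainer() corev1.Container {
	probe := corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	// Exec is promoted from the probe's embedded handler struct, so this
	// assignment compiles across API versions that name that struct
	// differently (Handler in the v1.13 era of this log, ProbeHandler later).
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	return corev1.Container{
		Name:          "liveness-exec",
		Image:         "busybox",
		Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &probe,
	}
}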
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:33:42.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-2kkdd/configmap-test-acf8feed-dc8f-11ea-9b9c-0242ac11000c
STEP: Creating a pod to test consume configMaps
Aug 12 11:33:42.891: INFO: Waiting up to 5m0s for pod "pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-2kkdd" to be "success or failure"
Aug 12 11:33:42.900: INFO: Pod "pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.832809ms
Aug 12 11:33:45.044: INFO: Pod "pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153277479s
Aug 12 11:33:47.046: INFO: Pod "pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155194526s
STEP: Saw pod success
Aug 12 11:33:47.046: INFO: Pod "pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 11:33:47.047: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c container env-test: 
STEP: delete the pod
Aug 12 11:33:47.242: INFO: Waiting for pod pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c to disappear
Aug 12 11:33:47.272: INFO: Pod pod-configmaps-acf960ea-dc8f-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:33:47.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2kkdd" for this suite.
Aug 12 11:33:53.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:33:53.330: INFO: namespace: e2e-tests-configmap-2kkdd, resource: bindings, ignored listing per whitelist
Aug 12 11:33:53.345: INFO: namespace e2e-tests-configmap-2kkdd deletion completed in 6.070352039s

• [SLOW TEST:10.610 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
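
For reference, the plumbing this spec exercises is a container environment variable sourced from a configMap key. A minimal sketch, with placeholder configMap and key names rather than the generated ones above:

// configmapenvdemo is a hedged sketch of a configMap-backed env var.
package configmapenvdemo

import corev1 "k8s.io/api/core/v1"

func envFromConfigMap() corev1.Container {
	return corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"},
					Key:                  "data-1",
				},
			},
		}},
	}
}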
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:33:53.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-b361f13a-dc8f-11ea-9b9c-0242ac11000c
STEP: Creating a pod to test consume secrets
Aug 12 11:33:53.696: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-54jr9" to be "success or failure"
Aug 12 11:33:53.876: INFO: Pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 180.021225ms
Aug 12 11:33:55.898: INFO: Pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201757695s
Aug 12 11:33:57.901: INFO: Pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204668924s
Aug 12 11:33:59.904: INFO: Pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 6.207863464s
Aug 12 11:34:01.908: INFO: Pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.211383262s
STEP: Saw pod success
Aug 12 11:34:01.908: INFO: Pod "pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 11:34:01.910: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c container secret-volume-test: 
STEP: delete the pod
Aug 12 11:34:01.930: INFO: Waiting for pod pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c to disappear
Aug 12 11:34:01.935: INFO: Pod pod-projected-secrets-b3674eba-dc8f-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:34:01.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-54jr9" for this suite.
Aug 12 11:34:07.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:34:07.998: INFO: namespace: e2e-tests-projected-54jr9, resource: bindings, ignored listing per whitelist
Aug 12 11:34:08.031: INFO: namespace e2e-tests-projected-54jr9 deletion completed in 6.093011914s

• [SLOW TEST:14.686 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
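
"Consumable in multiple volumes" here means the same secret is projected into two separate volumes, each with its own mount path. A minimal sketch of that pod spec, with illustrative names:

// projectedsecretdemo is a hedged sketch of one secret in two projected volumes.
package projectedsecretdemo

import corev1 "k8s.io/api/core/v1"

func twoSecretVolumes(secretName string) corev1.PodSpec {
	projected := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{
			{Name: "projected-secret-volume-1", VolumeSource: projected},
			{Name: "projected-secret-volume-2", VolumeSource: projected},
		},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/projected-secret-1/* /etc/projected-secret-2/*"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "projected-secret-volume-1", MountPath: "/etc/projected-secret-1", ReadOnly: true},
				{Name: "projected-secret-volume-2", MountPath: "/etc/projected-secret-2", ReadOnly: true},
			},
		}},
	}
}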
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:34:08.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 12 11:34:08.219: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:08.221: INFO: Number of nodes with available pods: 0
Aug 12 11:34:08.221: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:09.224: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:09.226: INFO: Number of nodes with available pods: 0
Aug 12 11:34:09.226: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:10.243: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:10.246: INFO: Number of nodes with available pods: 0
Aug 12 11:34:10.246: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:11.428: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:11.559: INFO: Number of nodes with available pods: 0
Aug 12 11:34:11.559: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:12.337: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:12.375: INFO: Number of nodes with available pods: 0
Aug 12 11:34:12.375: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:13.225: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:13.228: INFO: Number of nodes with available pods: 0
Aug 12 11:34:13.228: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:14.304: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:14.580: INFO: Number of nodes with available pods: 0
Aug 12 11:34:14.580: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:15.575: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:15.739: INFO: Number of nodes with available pods: 0
Aug 12 11:34:15.739: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:16.262: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:16.264: INFO: Number of nodes with available pods: 1
Aug 12 11:34:16.264: INFO: Node hunter-worker is running more than one daemon pod
Aug 12 11:34:17.224: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:17.226: INFO: Number of nodes with available pods: 2
Aug 12 11:34:17.226: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 12 11:34:17.257: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:17.259: INFO: Number of nodes with available pods: 1
Aug 12 11:34:17.259: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:18.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:18.268: INFO: Number of nodes with available pods: 1
Aug 12 11:34:18.268: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:19.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:19.265: INFO: Number of nodes with available pods: 1
Aug 12 11:34:19.265: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:20.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:20.271: INFO: Number of nodes with available pods: 1
Aug 12 11:34:20.271: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:21.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:21.265: INFO: Number of nodes with available pods: 1
Aug 12 11:34:21.265: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:22.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:22.265: INFO: Number of nodes with available pods: 1
Aug 12 11:34:22.265: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:23.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:23.267: INFO: Number of nodes with available pods: 1
Aug 12 11:34:23.267: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:24.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:24.266: INFO: Number of nodes with available pods: 1
Aug 12 11:34:24.266: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:25.705: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:25.811: INFO: Number of nodes with available pods: 1
Aug 12 11:34:25.811: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:26.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:26.265: INFO: Number of nodes with available pods: 1
Aug 12 11:34:26.265: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:27.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:27.266: INFO: Number of nodes with available pods: 1
Aug 12 11:34:27.266: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:28.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:28.265: INFO: Number of nodes with available pods: 1
Aug 12 11:34:28.265: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:29.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:29.265: INFO: Number of nodes with available pods: 1
Aug 12 11:34:29.265: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:30.264: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:30.267: INFO: Number of nodes with available pods: 1
Aug 12 11:34:30.267: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 12 11:34:31.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 12 11:34:31.265: INFO: Number of nodes with available pods: 2
Aug 12 11:34:31.265: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vgdls, will wait for the garbage collector to delete the pods
Aug 12 11:34:31.325: INFO: Deleting DaemonSet.extensions daemon-set took: 5.150295ms
Aug 12 11:34:31.425: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.148121ms
Aug 12 11:34:36.345: INFO: Number of nodes with available pods: 0
Aug 12 11:34:36.345: INFO: Number of running nodes: 0, number of available pods: 0
Aug 12 11:34:36.348: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vgdls/daemonsets","resourceVersion":"5895578"},"items":null}

Aug 12 11:34:36.350: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vgdls/pods","resourceVersion":"5895578"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:34:36.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vgdls" for this suite.
Aug 12 11:34:44.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:34:44.397: INFO: namespace: e2e-tests-daemonsets-vgdls, resource: bindings, ignored listing per whitelist
Aug 12 11:34:44.434: INFO: namespace e2e-tests-daemonsets-vgdls deletion completed in 8.070511706s

• [SLOW TEST:36.403 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
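Note on the skipped node above: the framework ignores hunter-control-plane because the DaemonSet's pods do not tolerate its node-role.kubernetes.io/master:NoSchedule taint, so only the two worker nodes are counted. A DaemonSet intended to cover such a node would carry a matching toleration, roughly as sketched below; this is illustrative only (the labels and image are placeholders, not the manifest the conformance test deploys).

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # name taken from the log; everything else is illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:
      # without this, the scheduler skips nodes tainted node-role.kubernetes.io/master:NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # placeholder image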
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:34:44.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 12 11:34:44.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6qfk8'
Aug 12 11:34:48.154: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 12 11:34:48.154: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Aug 12 11:34:50.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6qfk8'
Aug 12 11:34:50.282: INFO: stderr: ""
Aug 12 11:34:50.282: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:34:50.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6qfk8" for this suite.
Aug 12 11:35:12.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:35:12.602: INFO: namespace: e2e-tests-kubectl-6qfk8, resource: bindings, ignored listing per whitelist
Aug 12 11:35:12.630: INFO: namespace e2e-tests-kubectl-6qfk8 deletion completed in 22.344559158s

• [SLOW TEST:28.195 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
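The stderr captured above notes that 'kubectl run --generator=deployment/apps.v1' is deprecated in this release in favour of 'kubectl run --generator=run-pod/v1' or 'kubectl create'. The same workload can also be declared directly; a minimal apps/v1 Deployment for the image used by the test might look like the sketch below (the label key/value are assumptions for illustration, not something the test creates).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:                       # apps/v1 requires an explicit selector
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine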
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:35:12.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Aug 12 11:35:12.704: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 12 11:35:12.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:12.970: INFO: stderr: ""
Aug 12 11:35:12.970: INFO: stdout: "service/redis-slave created\n"
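The redis-slave Service just created sets only 'port: 6379'; when targetPort is omitted it defaults to the same value as port, so traffic still reaches container port 6379 (the redis-master Service created next spells the targetPort out explicitly). Written out in full, the equivalent spec would be:

spec:
  ports:
  - port: 6379
    targetPort: 6379   # implicit when omitted
  selector:
    app: redis
    role: slave
    tier: backend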
Aug 12 11:35:12.970: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 12 11:35:12.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:13.231: INFO: stderr: ""
Aug 12 11:35:13.231: INFO: stdout: "service/redis-master created\n"
Aug 12 11:35:13.231: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 12 11:35:13.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:13.489: INFO: stderr: ""
Aug 12 11:35:13.489: INFO: stdout: "service/frontend created\n"
Aug 12 11:35:13.489: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 12 11:35:13.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:13.736: INFO: stderr: ""
Aug 12 11:35:13.736: INFO: stdout: "deployment.extensions/frontend created\n"
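The frontend Deployment just created (like the redis ones that follow) is declared as extensions/v1beta1 with no spec.selector, which this v1.13 cluster defaults from the pod template labels. Under apps/v1 the selector must be stated explicitly; a sketch of the same Deployment in that form is below, for comparison only, not part of the test output.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:             # required in apps/v1; must match the template labels
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80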
Aug 12 11:35:13.736: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 12 11:35:13.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:14.615: INFO: stderr: ""
Aug 12 11:35:14.615: INFO: stdout: "deployment.extensions/redis-master created\n"
Aug 12 11:35:14.616: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 12 11:35:14.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:15.518: INFO: stderr: ""
Aug 12 11:35:15.518: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Aug 12 11:35:15.518: INFO: Waiting for all frontend pods to be Running.
Aug 12 11:35:30.569: INFO: Waiting for frontend to serve content.
Aug 12 11:35:32.147: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
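The Predis 'Connection refused [tcp://redis-slave:6379]' error above is transient: the frontend is probed before the redis-slave pods accept connections, and the test proceeds successfully a few seconds later. One common way to keep a Service from routing to a replica that is not yet serving is a readiness probe on the redis port; the fragment below is an illustrative addition to the slave container of the Deployment created earlier, not something this run defines.

      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        ports:
        - containerPort: 6379
        readinessProbe:          # keeps the pod out of the Service endpoints until redis answers on 6379
          tcpSocket:
            port: 6379
          initialDelaySeconds: 5
          periodSeconds: 5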
Aug 12 11:35:37.160: INFO: Trying to add a new entry to the guestbook.
Aug 12 11:35:37.213: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 12 11:35:37.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:37.962: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 12 11:35:37.962: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 12 11:35:37.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:38.270: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 12 11:35:38.270: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 12 11:35:38.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:38.657: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 12 11:35:38.657: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 12 11:35:38.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:38.804: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 12 11:35:38.804: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 12 11:35:38.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:39.363: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 12 11:35:39.363: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 12 11:35:39.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f28nt'
Aug 12 11:35:39.899: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 12 11:35:39.899: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:35:39.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f28nt" for this suite.
Aug 12 11:36:18.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:36:18.584: INFO: namespace: e2e-tests-kubectl-f28nt, resource: bindings, ignored listing per whitelist
Aug 12 11:36:18.587: INFO: namespace e2e-tests-kubectl-f28nt deletion completed in 38.498828418s

• [SLOW TEST:65.957 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:36:18.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 12 11:36:18.753: INFO: Waiting up to 5m0s for pod "downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-rfhbf" to be "success or failure"
Aug 12 11:36:18.771: INFO: Pod "downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.860275ms
Aug 12 11:36:20.775: INFO: Pod "downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021953393s
Aug 12 11:36:22.778: INFO: Pod "downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025533988s
STEP: Saw pod success
Aug 12 11:36:22.778: INFO: Pod "downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 11:36:22.780: INFO: Trying to get logs from node hunter-worker2 pod downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug 12 11:36:23.057: INFO: Waiting for pod downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c to disappear
Aug 12 11:36:23.097: INFO: Pod downward-api-09e2d631-dc90-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:36:23.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rfhbf" for this suite.
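The pod exercised above surfaces the node's address to its container through the downward API. A minimal stand-alone example of that pattern is sketched below; the container name matches the dapi-container seen in the log, but the pod name, image, command and env var name are assumptions, not the generated test pod.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the downward API field carrying the node's IP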
Aug 12 11:36:29.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:36:29.310: INFO: namespace: e2e-tests-downward-api-rfhbf, resource: bindings, ignored listing per whitelist Aug 12 11:36:29.355: INFO: namespace e2e-tests-downward-api-rfhbf deletion completed in 6.255380474s • [SLOW TEST:10.767 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:36:29.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 12 11:36:29.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ld75p' Aug 12 11:36:29.531: INFO: stderr: "" Aug 12 11:36:29.531: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 12 11:36:34.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ld75p -o json' Aug 12 11:36:34.671: INFO: stderr: "" Aug 12 11:36:34.671: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-12T11:36:29Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-ld75p\",\n \"resourceVersion\": \"5896075\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-ld75p/pods/e2e-test-nginx-pod\",\n \"uid\": \"104e6237-dc90-11ea-b2c9-0242ac120008\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dqvc6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": 
\"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dqvc6\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dqvc6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-12T11:36:29Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-12T11:36:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-12T11:36:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-12T11:36:29Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a73e5f77576107cdeac06ce2cb2b0069a016b27b21613b7a8d8dbb87a3802379\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-12T11:36:32Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.169\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-12T11:36:29Z\"\n }\n}\n" STEP: replace the image in the pod Aug 12 11:36:34.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-ld75p' Aug 12 11:36:34.925: INFO: stderr: "" Aug 12 11:36:34.925: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Aug 12 11:36:34.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ld75p' Aug 12 11:36:39.673: INFO: stderr: "" Aug 12 11:36:39.673: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:36:39.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ld75p" for this suite. 
Aug 12 11:36:45.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:36:45.773: INFO: namespace: e2e-tests-kubectl-ld75p, resource: bindings, ignored listing per whitelist
Aug 12 11:36:45.822: INFO: namespace e2e-tests-kubectl-ld75p deletion completed in 6.147350705s

• [SLOW TEST:16.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:36:45.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-ftwr
STEP: Creating a pod to test atomic-volume-subpath
Aug 12 11:36:45.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ftwr" in namespace "e2e-tests-subpath-2w74x" to be "success or failure"
Aug 12 11:36:45.975: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.404374ms
Aug 12 11:36:47.977: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015104017s
Aug 12 11:36:49.981: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018738252s
Aug 12 11:36:52.112: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149305924s
Aug 12 11:36:54.115: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 8.153011453s
Aug 12 11:36:56.118: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 10.156127654s
Aug 12 11:36:58.122: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 12.159556238s
Aug 12 11:37:00.125: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 14.162220787s
Aug 12 11:37:02.127: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 16.164633462s
Aug 12 11:37:04.130: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 18.167958143s
Aug 12 11:37:06.138: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 20.175410592s
Aug 12 11:37:08.142: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 22.179291857s
Aug 12 11:37:10.145: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Running", Reason="", readiness=false. Elapsed: 24.183055761s
Aug 12 11:37:12.149: INFO: Pod "pod-subpath-test-downwardapi-ftwr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.186364353s
STEP: Saw pod success
Aug 12 11:37:12.149: INFO: Pod "pod-subpath-test-downwardapi-ftwr" satisfied condition "success or failure"
Aug 12 11:37:12.151: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-ftwr container test-container-subpath-downwardapi-ftwr: 
STEP: delete the pod
Aug 12 11:37:12.179: INFO: Waiting for pod pod-subpath-test-downwardapi-ftwr to disappear
Aug 12 11:37:12.207: INFO: Pod pod-subpath-test-downwardapi-ftwr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ftwr
Aug 12 11:37:12.207: INFO: Deleting pod "pod-subpath-test-downwardapi-ftwr" in namespace "e2e-tests-subpath-2w74x"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:37:12.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2w74x" for this suite.
Aug 12 11:37:18.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:37:18.269: INFO: namespace: e2e-tests-subpath-2w74x, resource: bindings, ignored listing per whitelist
Aug 12 11:37:18.291: INFO: namespace e2e-tests-subpath-2w74x deletion completed in 6.07840555s

• [SLOW TEST:32.469 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:37:18.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 12 11:37:18.391: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qwrwn,SelfLink:/api/v1/namespaces/e2e-tests-watch-qwrwn/configmaps/e2e-watch-test-watch-closed,UID:2d69826d-dc90-11ea-b2c9-0242ac120008,ResourceVersion:5896227,Generation:0,CreationTimestamp:2020-08-12 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 12 11:37:18.391: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qwrwn,SelfLink:/api/v1/namespaces/e2e-tests-watch-qwrwn/configmaps/e2e-watch-test-watch-closed,UID:2d69826d-dc90-11ea-b2c9-0242ac120008,ResourceVersion:5896228,Generation:0,CreationTimestamp:2020-08-12 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 12 11:37:18.401: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qwrwn,SelfLink:/api/v1/namespaces/e2e-tests-watch-qwrwn/configmaps/e2e-watch-test-watch-closed,UID:2d69826d-dc90-11ea-b2c9-0242ac120008,ResourceVersion:5896229,Generation:0,CreationTimestamp:2020-08-12 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 12 11:37:18.401: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-qwrwn,SelfLink:/api/v1/namespaces/e2e-tests-watch-qwrwn/configmaps/e2e-watch-test-watch-closed,UID:2d69826d-dc90-11ea-b2c9-0242ac120008,ResourceVersion:5896230,Generation:0,CreationTimestamp:2020-08-12 11:37:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:37:18.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-qwrwn" for this suite. 
Aug 12 11:37:24.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:37:24.458: INFO: namespace: e2e-tests-watch-qwrwn, resource: bindings, ignored listing per whitelist
Aug 12 11:37:24.483: INFO: namespace e2e-tests-watch-qwrwn deletion completed in 6.07619311s

• [SLOW TEST:6.192 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:37:24.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 12 11:37:24.688: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 12 11:37:24.693: INFO: Waiting for terminating namespaces to be deleted...
Aug 12 11:37:24.695: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Aug 12 11:37:24.697: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Aug 12 11:37:24.698: INFO: Container kube-proxy ready: true, restart count 0
Aug 12 11:37:24.698: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Aug 12 11:37:24.698: INFO: Container kindnet-cni ready: true, restart count 0
Aug 12 11:37:24.698: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Aug 12 11:37:24.729: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Aug 12 11:37:24.729: INFO: Container kindnet-cni ready: true, restart count 0
Aug 12 11:37:24.729: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Aug 12 11:37:24.729: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Aug 12 11:37:24.927: INFO: Pod kindnet-2w5m4 requesting resource cpu=100m on Node hunter-worker
Aug 12 11:37:24.927: INFO: Pod kindnet-hpnvh requesting resource cpu=100m on Node hunter-worker2
Aug 12 11:37:24.927: INFO: Pod kube-proxy-8wnps requesting resource cpu=0m on Node hunter-worker
Aug 12 11:37:24.927: INFO: Pod kube-proxy-b6f6s requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-3154f9a9-dc90-11ea-9b9c-0242ac11000c.162a8243a913e804], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-h8t2h/filler-pod-3154f9a9-dc90-11ea-9b9c-0242ac11000c to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3154f9a9-dc90-11ea-9b9c-0242ac11000c.162a8243fb5624c1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3154f9a9-dc90-11ea-9b9c-0242ac11000c.162a8244cebb4ead], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3154f9a9-dc90-11ea-9b9c-0242ac11000c.162a8244e4ca5c29], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3155af60-dc90-11ea-9b9c-0242ac11000c.162a8243b15d62e7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-h8t2h/filler-pod-3155af60-dc90-11ea-9b9c-0242ac11000c to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3155af60-dc90-11ea-9b9c-0242ac11000c.162a82446aa7a981], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3155af60-dc90-11ea-9b9c-0242ac11000c.162a8244ed61ec01], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3155af60-dc90-11ea-9b9c-0242ac11000c.162a8244ff8a7bc5], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.162a824591969712], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 11:37:34.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-h8t2h" for this suite.
Aug 12 11:37:51.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 11:37:51.290: INFO: namespace: e2e-tests-sched-pred-h8t2h, resource: bindings, ignored listing per whitelist
Aug 12 11:37:51.300: INFO: namespace e2e-tests-sched-pred-h8t2h deletion completed in 16.875370605s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:26.816 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 11:37:51.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-qpl2n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qpl2n to expose endpoints map[]
Aug 12 11:37:53.436: INFO: Get endpoints failed (351.599325ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 12 11:37:54.438: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qpl2n exposes endpoints map[] (1.354103946s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-qpl2n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qpl2n to expose endpoints map[pod1:[100]]
Aug 12 11:37:58.782: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.338508131s elapsed, will retry)
Aug 12 11:38:00.793: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qpl2n exposes endpoints map[pod1:[100]] (6.349694742s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-qpl2n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qpl2n to expose endpoints map[pod1:[100] pod2:[101]]
Aug 12 11:38:05.187: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qpl2n exposes endpoints map[pod1:[100] pod2:[101]] (4.390530829s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-qpl2n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-qpl2n to expose endpoints map[pod2:[101]]
Aug 12 11:38:06.288: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qpl2n exposes endpoints map[pod2:[101]] (1.097757604s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-qpl2n
STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace e2e-tests-services-qpl2n to expose endpoints map[] Aug 12 11:38:07.322: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-qpl2n exposes endpoints map[] (1.030846853s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:38:07.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-qpl2n" for this suite. Aug 12 11:38:13.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:38:14.045: INFO: namespace: e2e-tests-services-qpl2n, resource: bindings, ignored listing per whitelist Aug 12 11:38:14.049: INFO: namespace e2e-tests-services-qpl2n deletion completed in 6.391823842s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:22.749 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:38:14.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-dclcg;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +notcp 
+noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-dclcg.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-dclcg.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dclcg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 204.79.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.79.204_udp@PTR;check="$$(dig +tcp +noall +answer +search 204.79.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.79.204_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-dclcg;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-dclcg;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-dclcg.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-dclcg.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-dclcg.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-dclcg.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dclcg.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 204.79.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.79.204_udp@PTR;check="$$(dig +tcp +noall +answer +search 204.79.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.79.204_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 12 11:38:22.318: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.322: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.328: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.330: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.331: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.387: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.389: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.390: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.392: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.394: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.395: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.397: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.399: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:22.411: INFO: Lookups using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-dclcg jessie_tcp@dns-test-service.e2e-tests-dns-dclcg jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc] Aug 12 11:38:27.415: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.418: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.424: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.426: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.427: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.440: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.441: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.443: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.445: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.447: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not 
find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.449: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.451: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.452: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:27.466: INFO: Lookups using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-dclcg jessie_tcp@dns-test-service.e2e-tests-dns-dclcg jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc] Aug 12 11:38:32.473: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.486: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.492: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.494: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.496: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.512: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.514: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.517: INFO: Unable to read 
jessie_udp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.519: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.522: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.524: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.526: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.528: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:32.606: INFO: Lookups using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-dclcg jessie_tcp@dns-test-service.e2e-tests-dns-dclcg jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc] Aug 12 11:38:37.415: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.418: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.425: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.427: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.429: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod 
e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.445: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.448: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.450: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.452: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.455: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.457: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.460: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.463: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:37.478: INFO: Lookups using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-dclcg jessie_tcp@dns-test-service.e2e-tests-dns-dclcg jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc] Aug 12 11:38:42.415: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.437: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods 
dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.445: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.447: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.449: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.466: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.469: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.471: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.473: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.475: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.478: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.480: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.482: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:42.496: INFO: Lookups using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-dclcg jessie_tcp@dns-test-service.e2e-tests-dns-dclcg 
jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc] Aug 12 11:38:47.415: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.418: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.425: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.428: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.449: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.452: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.454: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.457: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.459: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.462: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.465: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.468: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc from pod e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c: the server could not find the requested resource (get pods dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c) Aug 12 11:38:47.483: INFO: Lookups using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_tcp@dns-test-service.e2e-tests-dns-dclcg.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-dclcg jessie_tcp@dns-test-service.e2e-tests-dns-dclcg jessie_udp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@dns-test-service.e2e-tests-dns-dclcg.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-dclcg.svc] Aug 12 11:38:53.524: INFO: DNS probes using e2e-tests-dns-dclcg/dns-test-4eb3f43f-dc90-11ea-9b9c-0242ac11000c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:38:54.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-dclcg" for this suite. Aug 12 11:39:00.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:39:01.099: INFO: namespace: e2e-tests-dns-dclcg, resource: bindings, ignored listing per whitelist Aug 12 11:39:01.121: INFO: namespace e2e-tests-dns-dclcg deletion completed in 6.222646599s • [SLOW TEST:47.072 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:39:01.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 12 11:39:01.248: INFO: Waiting up to 5m0s for pod "pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-79bdh" to be "success or failure" Aug 12 11:39:01.271: INFO: Pod "pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.253073ms Aug 12 11:39:03.274: INFO: Pod "pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026597922s Aug 12 11:39:05.353: INFO: Pod "pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.10550505s Aug 12 11:39:07.356: INFO: Pod "pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10878182s STEP: Saw pod success Aug 12 11:39:07.356: INFO: Pod "pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:39:07.359: INFO: Trying to get logs from node hunter-worker pod pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 11:39:07.379: INFO: Waiting for pod pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:39:07.390: INFO: Pod pod-6abdaa38-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:39:07.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-79bdh" for this suite. Aug 12 11:39:13.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:39:13.470: INFO: namespace: e2e-tests-emptydir-79bdh, resource: bindings, ignored listing per whitelist Aug 12 11:39:13.503: INFO: namespace e2e-tests-emptydir-79bdh deletion completed in 6.110062345s • [SLOW TEST:12.381 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:39:13.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:39:13.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-9glj4" to be "success or failure" Aug 12 11:39:13.869: INFO: Pod "downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.717238ms Aug 12 11:39:15.934: INFO: Pod "downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075969278s Aug 12 11:39:17.970: INFO: Pod "downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111484002s Aug 12 11:39:19.973: INFO: Pod "downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.114602911s STEP: Saw pod success Aug 12 11:39:19.973: INFO: Pod "downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:39:19.975: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:39:20.097: INFO: Waiting for pod downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:39:20.107: INFO: Pod downwardapi-volume-7239519d-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:39:20.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9glj4" for this suite. Aug 12 11:39:26.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:39:26.141: INFO: namespace: e2e-tests-projected-9glj4, resource: bindings, ignored listing per whitelist Aug 12 11:39:26.180: INFO: namespace e2e-tests-projected-9glj4 deletion completed in 6.070581824s • [SLOW TEST:12.677 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:39:26.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 12 11:39:26.293: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 12 11:39:31.296: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:39:32.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-cp9nf" for this suite. 
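
[Editor's note, not part of the captured log] The ReplicationController test above creates a controller selecting pods by label and then re-labels the running pod so it no longer matches, at which point the controller "releases" (orphans) it and spins up a replacement. A minimal sketch of the kind of object involved, built with k8s.io/api types; the names, label, and image are illustrative, not taken from the suite:

```go
// Sketch of a ReplicationController that selects pods by the "name=pod-release" label.
// Re-labelling a running pod out of this selector makes the controller release it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-release"} // illustrative label, echoing the pod name in the log

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-release",
						Image: "nginx", // placeholder; the suite uses its own test images
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
```

Changing the label on the managed pod (for example `kubectl label pod <pod-name> name=not-matching --overwrite`) is the "matched label of one of its pods change" step the log refers to.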
Aug 12 11:39:40.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:39:40.859: INFO: namespace: e2e-tests-replication-controller-cp9nf, resource: bindings, ignored listing per whitelist Aug 12 11:39:40.902: INFO: namespace e2e-tests-replication-controller-cp9nf deletion completed in 8.179885569s • [SLOW TEST:14.722 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:39:40.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-82a00b54-dc90-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 11:39:41.466: INFO: Waiting up to 5m0s for pod "pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-cr8ch" to be "success or failure" Aug 12 11:39:41.510: INFO: Pod "pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.959101ms Aug 12 11:39:43.622: INFO: Pod "pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156177486s Aug 12 11:39:45.625: INFO: Pod "pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158745382s Aug 12 11:39:47.758: INFO: Pod "pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.291693762s STEP: Saw pod success Aug 12 11:39:47.758: INFO: Pod "pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:39:47.760: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 11:39:47.822: INFO: Waiting for pod pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:39:47.862: INFO: Pod pod-secrets-82b667f6-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:39:47.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cr8ch" for this suite. 
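
[Editor's note, not part of the captured log] The Secrets test above mounts a Secret as a volume and reads it back from inside the container. A minimal sketch of that pod shape, with illustrative names, a placeholder image, and an assumed secret key:

```go
// Sketch of a pod that consumes a Secret through a read-only volume mount.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test-example"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",                                    // placeholder; the suite uses a mounttest image
				Command: []string{"cat", "/etc/secret-volume/data-1"}, // "data-1" is an assumed key name
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```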
Aug 12 11:39:54.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:39:54.158: INFO: namespace: e2e-tests-secrets-cr8ch, resource: bindings, ignored listing per whitelist Aug 12 11:39:54.160: INFO: namespace e2e-tests-secrets-cr8ch deletion completed in 6.293281899s • [SLOW TEST:13.257 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:39:54.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Aug 12 11:39:54.386: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 12 11:39:54.394: INFO: Waiting for terminating namespaces to be deleted... Aug 12 11:39:54.396: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Aug 12 11:39:54.399: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 12 11:39:54.399: INFO: Container kube-proxy ready: true, restart count 0 Aug 12 11:39:54.399: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 12 11:39:54.399: INFO: Container kindnet-cni ready: true, restart count 0 Aug 12 11:39:54.399: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Aug 12 11:39:54.402: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded) Aug 12 11:39:54.402: INFO: Container kindnet-cni ready: true, restart count 0 Aug 12 11:39:54.402: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded) Aug 12 11:39:54.402: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162a826674bd4642], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:39:55.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-t9z98" for this suite. 
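
[Editor's note, not part of the captured log] The SchedulerPredicates test above submits a pod whose nodeSelector matches no node and then watches for the FailedScheduling event quoted in the log ("0/3 nodes are available: 3 node(s) didn't match node selector."). A minimal sketch of such a pod; the selector key/value and image are illustrative:

```go
// Sketch of a pod that can never be scheduled because no node carries its selector label.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node has this label, so the pod stays Pending and the scheduler
			// records a FailedScheduling event like the one in the log above.
			NodeSelector: map[string]string{"label": "nonempty"}, // illustrative key/value
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "nginx", // placeholder image
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```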
Aug 12 11:40:01.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:40:01.478: INFO: namespace: e2e-tests-sched-pred-t9z98, resource: bindings, ignored listing per whitelist Aug 12 11:40:01.497: INFO: namespace e2e-tests-sched-pred-t9z98 deletion completed in 6.078454826s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.337 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:40:01.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 12 11:40:01.597: INFO: Waiting up to 5m0s for pod "pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-rxmtq" to be "success or failure" Aug 12 11:40:01.601: INFO: Pod "pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177844ms Aug 12 11:40:03.604: INFO: Pod "pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007336893s Aug 12 11:40:05.607: INFO: Pod "pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.010248089s Aug 12 11:40:07.610: INFO: Pod "pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012930829s STEP: Saw pod success Aug 12 11:40:07.610: INFO: Pod "pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:40:07.611: INFO: Trying to get logs from node hunter-worker pod pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 11:40:07.630: INFO: Waiting for pod pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:40:07.641: INFO: Pod pod-8eb4c7e9-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:40:07.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rxmtq" for this suite. 
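
[Editor's note, not part of the captured log] The EmptyDir "(root,0666,tmpfs)" test above writes a 0666-mode file into a memory-backed emptyDir and checks what the container sees. A minimal sketch of that pod; the image and command are placeholders standing in for the suite's mounttest image:

```go
// Sketch of a pod with a tmpfs-backed emptyDir (medium "Memory"), exercising 0666 file modes.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs, the variant this test targets.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```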
Aug 12 11:40:13.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:40:14.035: INFO: namespace: e2e-tests-emptydir-rxmtq, resource: bindings, ignored listing per whitelist Aug 12 11:40:14.042: INFO: namespace e2e-tests-emptydir-rxmtq deletion completed in 6.398308189s • [SLOW TEST:12.545 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:40:14.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-tzcg2/configmap-test-96691b03-dc90-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:40:14.698: INFO: Waiting up to 5m0s for pod "pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-tzcg2" to be "success or failure" Aug 12 11:40:14.934: INFO: Pod "pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 236.776891ms Aug 12 11:40:16.938: INFO: Pod "pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240177491s Aug 12 11:40:18.941: INFO: Pod "pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243294128s Aug 12 11:40:20.945: INFO: Pod "pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246986627s STEP: Saw pod success Aug 12 11:40:20.945: INFO: Pod "pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:40:20.947: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c container env-test: STEP: delete the pod Aug 12 11:40:21.416: INFO: Waiting for pod pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:40:21.803: INFO: Pod pod-configmaps-9683a275-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:40:21.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tzcg2" for this suite. 
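
[Editor's note, not part of the captured log] The ConfigMap test above injects a ConfigMap value into the container's environment and verifies it from the container's output. A minimal sketch of the env-var wiring; the ConfigMap name, key, and image are illustrative:

```go
// Sketch of a pod consuming a ConfigMap key as an environment variable.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-example"},
							Key:                  "data-1", // assumed key name
						},
					},
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```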
Aug 12 11:40:30.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:40:30.169: INFO: namespace: e2e-tests-configmap-tzcg2, resource: bindings, ignored listing per whitelist Aug 12 11:40:30.204: INFO: namespace e2e-tests-configmap-tzcg2 deletion completed in 8.398573269s • [SLOW TEST:16.162 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:40:30.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-a0121643-dc90-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:40:31.330: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-nql5j" to be "success or failure" Aug 12 11:40:31.374: INFO: Pod "pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.963531ms Aug 12 11:40:33.377: INFO: Pod "pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046987882s Aug 12 11:40:35.432: INFO: Pod "pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101255614s Aug 12 11:40:37.435: INFO: Pod "pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10424871s STEP: Saw pod success Aug 12 11:40:37.435: INFO: Pod "pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:40:37.436: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 12 11:40:37.469: INFO: Waiting for pod pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:40:37.509: INFO: Pod pod-projected-configmaps-a015b45b-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:40:37.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nql5j" for this suite. 
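
[Editor's note, not part of the captured log] The Projected configMap test above mounts a ConfigMap through a projected volume and runs the container as a non-root user. A minimal sketch of that shape; the UID, names, and image are illustrative:

```go
// Sketch of a non-root pod consuming a ConfigMap via a projected volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // any non-zero UID satisfies the "as non-root" part of the test name

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-example"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```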
Aug 12 11:40:43.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:40:43.564: INFO: namespace: e2e-tests-projected-nql5j, resource: bindings, ignored listing per whitelist Aug 12 11:40:43.604: INFO: namespace e2e-tests-projected-nql5j deletion completed in 6.091417525s • [SLOW TEST:13.399 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:40:43.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9ndjz STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 12 11:40:43.803: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 12 11:41:14.073: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.125 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9ndjz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:41:14.073: INFO: >>> kubeConfig: /root/.kube/config I0812 11:41:14.100456 6 log.go:172] (0xc00094f550) (0xc001f22c80) Create stream I0812 11:41:14.100482 6 log.go:172] (0xc00094f550) (0xc001f22c80) Stream added, broadcasting: 1 I0812 11:41:14.101934 6 log.go:172] (0xc00094f550) Reply frame received for 1 I0812 11:41:14.101960 6 log.go:172] (0xc00094f550) (0xc00029e140) Create stream I0812 11:41:14.101968 6 log.go:172] (0xc00094f550) (0xc00029e140) Stream added, broadcasting: 3 I0812 11:41:14.102673 6 log.go:172] (0xc00094f550) Reply frame received for 3 I0812 11:41:14.102707 6 log.go:172] (0xc00094f550) (0xc001e88c80) Create stream I0812 11:41:14.102724 6 log.go:172] (0xc00094f550) (0xc001e88c80) Stream added, broadcasting: 5 I0812 11:41:14.103357 6 log.go:172] (0xc00094f550) Reply frame received for 5 I0812 11:41:15.165997 6 log.go:172] (0xc00094f550) Data frame received for 5 I0812 11:41:15.166067 6 log.go:172] (0xc001e88c80) (5) Data frame handling I0812 11:41:15.166136 6 log.go:172] (0xc00094f550) Data frame received for 3 I0812 11:41:15.166183 6 log.go:172] (0xc00029e140) (3) Data frame handling I0812 11:41:15.166214 6 log.go:172] (0xc00029e140) (3) Data frame sent I0812 11:41:15.166399 6 log.go:172] (0xc00094f550) Data frame received for 3 I0812 11:41:15.166466 6 log.go:172] (0xc00029e140) (3) Data frame handling I0812 11:41:15.168698 6 log.go:172] (0xc00094f550) Data frame received for 1 I0812 11:41:15.168936 6 
log.go:172] (0xc001f22c80) (1) Data frame handling I0812 11:41:15.168981 6 log.go:172] (0xc001f22c80) (1) Data frame sent I0812 11:41:15.169055 6 log.go:172] (0xc00094f550) (0xc001f22c80) Stream removed, broadcasting: 1 I0812 11:41:15.169107 6 log.go:172] (0xc00094f550) Go away received I0812 11:41:15.169417 6 log.go:172] (0xc00094f550) (0xc001f22c80) Stream removed, broadcasting: 1 I0812 11:41:15.169457 6 log.go:172] (0xc00094f550) (0xc00029e140) Stream removed, broadcasting: 3 I0812 11:41:15.169485 6 log.go:172] (0xc00094f550) (0xc001e88c80) Stream removed, broadcasting: 5 Aug 12 11:41:15.169: INFO: Found all expected endpoints: [netserver-0] Aug 12 11:41:15.173: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.178 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9ndjz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:41:15.173: INFO: >>> kubeConfig: /root/.kube/config I0812 11:41:15.206819 6 log.go:172] (0xc00094fad0) (0xc001f22e60) Create stream I0812 11:41:15.206849 6 log.go:172] (0xc00094fad0) (0xc001f22e60) Stream added, broadcasting: 1 I0812 11:41:15.209358 6 log.go:172] (0xc00094fad0) Reply frame received for 1 I0812 11:41:15.209399 6 log.go:172] (0xc00094fad0) (0xc001365860) Create stream I0812 11:41:15.209417 6 log.go:172] (0xc00094fad0) (0xc001365860) Stream added, broadcasting: 3 I0812 11:41:15.210211 6 log.go:172] (0xc00094fad0) Reply frame received for 3 I0812 11:41:15.210248 6 log.go:172] (0xc00094fad0) (0xc001e89180) Create stream I0812 11:41:15.210259 6 log.go:172] (0xc00094fad0) (0xc001e89180) Stream added, broadcasting: 5 I0812 11:41:15.211003 6 log.go:172] (0xc00094fad0) Reply frame received for 5 I0812 11:41:16.295161 6 log.go:172] (0xc00094fad0) Data frame received for 5 I0812 11:41:16.295186 6 log.go:172] (0xc001e89180) (5) Data frame handling I0812 11:41:16.295224 6 log.go:172] (0xc00094fad0) Data frame received for 3 I0812 11:41:16.295263 6 log.go:172] (0xc001365860) (3) Data frame handling I0812 11:41:16.295284 6 log.go:172] (0xc001365860) (3) Data frame sent I0812 11:41:16.295300 6 log.go:172] (0xc00094fad0) Data frame received for 3 I0812 11:41:16.295314 6 log.go:172] (0xc001365860) (3) Data frame handling I0812 11:41:16.298820 6 log.go:172] (0xc00094fad0) Data frame received for 1 I0812 11:41:16.298851 6 log.go:172] (0xc001f22e60) (1) Data frame handling I0812 11:41:16.298872 6 log.go:172] (0xc001f22e60) (1) Data frame sent I0812 11:41:16.298910 6 log.go:172] (0xc00094fad0) (0xc001f22e60) Stream removed, broadcasting: 1 I0812 11:41:16.298944 6 log.go:172] (0xc00094fad0) Go away received I0812 11:41:16.299076 6 log.go:172] (0xc00094fad0) (0xc001f22e60) Stream removed, broadcasting: 1 I0812 11:41:16.299102 6 log.go:172] (0xc00094fad0) (0xc001365860) Stream removed, broadcasting: 3 I0812 11:41:16.299118 6 log.go:172] (0xc00094fad0) (0xc001e89180) Stream removed, broadcasting: 5 Aug 12 11:41:16.299: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:41:16.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9ndjz" for this suite. 
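
[Editor's note, not part of the captured log] The node-pod UDP check above runs `echo 'hostName' | nc -w 1 -u <pod-ip> 8081` from a host-exec pod and treats any non-empty reply (the endpoint's hostname) as proof of connectivity. A rough Go equivalent of that probe; the address is copied from the log and the one-second timeout mirrors `-w 1`:

```go
// Sketch of the UDP "hostName" probe the test performs via nc.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.244.2.125:8081" // pod IP and UDP port taken from the log; adjust for your cluster

	conn, err := net.DialTimeout("udp", addr, time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// The test's netserver endpoint answers the probe string with its hostname,
	// which is what "Found all expected endpoints: [netserver-0]" reflects above.
	if _, err := conn.Write([]byte("hostName")); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	_ = conn.SetReadDeadline(time.Now().Add(time.Second))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("endpoint answered: %q\n", buf[:n])
}
```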
Aug 12 11:41:30.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:41:30.347: INFO: namespace: e2e-tests-pod-network-test-9ndjz, resource: bindings, ignored listing per whitelist Aug 12 11:41:30.393: INFO: namespace e2e-tests-pod-network-test-9ndjz deletion completed in 14.090093384s • [SLOW TEST:46.789 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:41:30.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:41:30.509: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-fcvll" to be "success or failure" Aug 12 11:41:30.513: INFO: Pod "downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004133ms Aug 12 11:41:32.518: INFO: Pod "downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126706s Aug 12 11:41:34.534: INFO: Pod "downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025411234s STEP: Saw pod success Aug 12 11:41:34.534: INFO: Pod "downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:41:34.537: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:41:34.589: INFO: Waiting for pod downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c to disappear Aug 12 11:41:34.592: INFO: Pod downwardapi-volume-c3b3967e-dc90-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:41:34.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fcvll" for this suite. 
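
[Editor's note, not part of the captured log] The Downward API volume test above sets DefaultMode on the volume and checks the mode of the projected files inside the container. A minimal sketch; the 0400 mode, names, image, and command are illustrative:

```go
// Sketch of a downward API volume whose files inherit an explicit DefaultMode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // illustrative mode; every projected file inherits it

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &defaultMode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder; the suite's mounttest image reports the file mode
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```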
Aug 12 11:41:40.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:41:40.661: INFO: namespace: e2e-tests-downward-api-fcvll, resource: bindings, ignored listing per whitelist Aug 12 11:41:40.675: INFO: namespace e2e-tests-downward-api-fcvll deletion completed in 6.078153662s • [SLOW TEST:10.282 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:41:40.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-frzb7 Aug 12 11:41:48.886: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-frzb7 STEP: checking the pod's current state and verifying that restartCount is present Aug 12 11:41:48.889: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:45:49.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-frzb7" for this suite. 
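
[Editor's note, not part of the captured log] The probe test above runs pod liveness-http with an HTTP liveness probe against /healthz and verifies over several minutes that restartCount stays at 0. A minimal sketch of such a probe, assuming the v1.13-era k8s.io/api used by this suite (the embedded field is `Handler` there; newer releases call it `ProbeHandler`); the image is a hypothetical server that always returns 200 on /healthz:

```go
// Sketch of a pod whose HTTP liveness probe keeps succeeding, so the kubelet never restarts it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "registry.example.com/healthz-server", // hypothetical image serving 200 on /healthz:8080
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // v1.13-era field name
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```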
Aug 12 11:45:55.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:45:55.072: INFO: namespace: e2e-tests-container-probe-frzb7, resource: bindings, ignored listing per whitelist Aug 12 11:45:55.169: INFO: namespace e2e-tests-container-probe-frzb7 deletion completed in 6.145025033s • [SLOW TEST:254.493 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:45:55.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:45:55.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-h9nzl" for this suite. 
Aug 12 11:46:01.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:46:01.664: INFO: namespace: e2e-tests-kubelet-test-h9nzl, resource: bindings, ignored listing per whitelist Aug 12 11:46:01.703: INFO: namespace e2e-tests-kubelet-test-h9nzl deletion completed in 6.231468646s • [SLOW TEST:6.534 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:46:01.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hrltk [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-hrltk STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-hrltk STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-hrltk STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-hrltk Aug 12 11:46:08.760: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrltk, name: ss-0, uid: 696ee88e-dc91-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete. Aug 12 11:46:08.784: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrltk, name: ss-0, uid: 696ee88e-dc91-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete. Aug 12 11:46:17.544: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrltk, name: ss-0, uid: 696ee88e-dc91-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete. Aug 12 11:46:17.660: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-hrltk, name: ss-0, uid: 696ee88e-dc91-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete. 
Aug 12 11:46:17.676: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-hrltk STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-hrltk STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-hrltk and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 12 11:46:23.874: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hrltk Aug 12 11:46:23.877: INFO: Scaling statefulset ss to 0 Aug 12 11:46:33.927: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 11:46:33.931: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:46:33.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hrltk" for this suite. Aug 12 11:46:41.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:46:42.038: INFO: namespace: e2e-tests-statefulset-hrltk, resource: bindings, ignored listing per whitelist Aug 12 11:46:42.085: INFO: namespace e2e-tests-statefulset-hrltk deletion completed in 8.133705774s • [SLOW TEST:40.382 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:46:42.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 12 11:46:42.254: INFO: Waiting up to 5m0s for pod "downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-qqjnb" to be "success or failure" Aug 12 11:46:42.263: INFO: Pod "downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.910619ms Aug 12 11:46:44.296: INFO: Pod "downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041963852s Aug 12 11:46:46.301: INFO: Pod "downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.046672443s Aug 12 11:46:48.304: INFO: Pod "downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.050445905s STEP: Saw pod success Aug 12 11:46:48.304: INFO: Pod "downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:46:48.307: INFO: Trying to get logs from node hunter-worker2 pod downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c container dapi-container: STEP: delete the pod Aug 12 11:46:48.367: INFO: Waiting for pod downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:46:48.463: INFO: Pod downward-api-7d7f9ac6-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:46:48.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qqjnb" for this suite. Aug 12 11:46:54.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:46:54.527: INFO: namespace: e2e-tests-downward-api-qqjnb, resource: bindings, ignored listing per whitelist Aug 12 11:46:54.556: INFO: namespace e2e-tests-downward-api-qqjnb deletion completed in 6.089266468s • [SLOW TEST:12.470 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:46:54.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 12 11:46:59.797: INFO: Successfully updated pod "labelsupdate852469ba-dc91-11ea-9b9c-0242ac11000c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:47:01.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rhpn9" for this suite. 
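The labelsupdate pod above exposes its own metadata.labels through a downwardAPI volume; the case then updates the labels and expects the mounted file to change without a container restart. A rough equivalent in the v1.13-era k8s.io/api Go types (image, mount path, and the initial label are illustrative assumptions):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate",
                Labels: map[string]string{"key": "value1"},
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // patching the pod's labels later is reflected in /etc/podinfo/labels in place
    }
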
Aug 12 11:47:23.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:47:24.015: INFO: namespace: e2e-tests-downward-api-rhpn9, resource: bindings, ignored listing per whitelist Aug 12 11:47:24.041: INFO: namespace e2e-tests-downward-api-rhpn9 deletion completed in 22.217602233s • [SLOW TEST:29.485 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:47:24.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Aug 12 11:47:24.142: INFO: Waiting up to 5m0s for pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-var-expansion-5vqdz" to be "success or failure" Aug 12 11:47:24.189: INFO: Pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.884644ms Aug 12 11:47:26.191: INFO: Pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049417764s Aug 12 11:47:28.484: INFO: Pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342783395s Aug 12 11:47:30.489: INFO: Pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347106813s Aug 12 11:47:32.492: INFO: Pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.350534448s STEP: Saw pod success Aug 12 11:47:32.492: INFO: Pod "var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:47:32.495: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c container dapi-container: STEP: delete the pod Aug 12 11:47:32.527: INFO: Waiting for pod var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:47:32.577: INFO: Pod var-expansion-9679d7d7-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:47:32.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-5vqdz" for this suite. 
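The var-expansion pod above demonstrates env-to-env composition: a later variable references earlier ones with $(NAME) syntax and the kubelet expands them before starting the container. A minimal sketch under assumed names and values:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {Name: "FOO", Value: "foo-value"},
                        {Name: "BAR", Value: "bar-value"},
                        // $(FOO) and $(BAR) are expanded from the variables
                        // defined earlier in this list.
                        {Name: "FOOBAR", Value: "$(FOO)-$(BAR)"},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
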
Aug 12 11:47:38.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:47:38.618: INFO: namespace: e2e-tests-var-expansion-5vqdz, resource: bindings, ignored listing per whitelist Aug 12 11:47:38.690: INFO: namespace e2e-tests-var-expansion-5vqdz deletion completed in 6.109151581s • [SLOW TEST:14.648 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:47:38.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Aug 12 11:47:38.818: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-hcbfr" to be "success or failure" Aug 12 11:47:38.822: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.631553ms Aug 12 11:47:40.826: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007700178s Aug 12 11:47:42.967: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148923328s Aug 12 11:47:44.970: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151860931s Aug 12 11:47:46.974: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156339171s STEP: Saw pod success Aug 12 11:47:46.975: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 12 11:47:46.978: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 12 11:47:47.311: INFO: Waiting for pod pod-host-path-test to disappear Aug 12 11:47:47.534: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:47:47.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-hcbfr" for this suite. 
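pod-host-path-test above mounts a hostPath volume and has its containers report the mode of the mount point. A simplified pod of that shape; the host path, image, and the stat command here are assumptions, and the suite's own fixture uses a dedicated test image rather than busybox:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/hostpath-e2e"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-1",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "stat -c %a /test-volume"}, // print the mode of the mount point
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
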
Aug 12 11:47:53.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:47:53.615: INFO: namespace: e2e-tests-hostpath-hcbfr, resource: bindings, ignored listing per whitelist Aug 12 11:47:53.654: INFO: namespace e2e-tests-hostpath-hcbfr deletion completed in 6.117026198s • [SLOW TEST:14.965 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:47:53.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Aug 12 11:47:53.734: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix950582772/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:47:53.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-x9ksw" for this suite. 
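The proxy case above starts kubectl proxy on a Unix socket and then fetches /api/ through it. A small Go client that does the same thing over an assumed socket path:

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Socket created by `kubectl proxy --unix-socket=...`
        // (illustrative; the suite generates a random /tmp path).
        socket := "/tmp/kubectl-proxy-unix/test"

        client := &http.Client{
            Transport: &http.Transport{
                // Ignore the host in the URL and always dial the unix socket.
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    return net.Dial("unix", socket)
                },
            },
        }

        resp, err := client.Get("http://localhost/api/") // host is only used for the Host header
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // the proxied API root, the same output the test retrieves
    }
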
Aug 12 11:47:59.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:47:59.866: INFO: namespace: e2e-tests-kubectl-x9ksw, resource: bindings, ignored listing per whitelist Aug 12 11:47:59.963: INFO: namespace e2e-tests-kubectl-x9ksw deletion completed in 6.149402423s • [SLOW TEST:6.308 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:47:59.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dwbpp.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dwbpp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dwbpp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dwbpp.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dwbpp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dwbpp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 12 11:48:08.145: INFO: DNS probes using e2e-tests-dns-dwbpp/dns-test-abe1ac46-dc91-11ea-9b9c-0242ac11000c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:48:08.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-dwbpp" for this suite. 
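The wheezy/jessie probe commands above boil down to resolving the kubernetes service name at each level of the search path, over both UDP and TCP. A much smaller in-cluster sketch using the Go resolver; it assumes the default cluster.local domain and only covers the A-record half of what the probes check:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Only meaningful inside a cluster pod, where /etc/resolv.conf points at the
        // cluster DNS service and supplies the search domains used below.
        names := []string{
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster.local",
        }
        for _, name := range names {
            addrs, err := net.LookupHost(name)
            if err != nil {
                fmt.Printf("%s: lookup failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s -> %v\n", name, addrs) // should resolve to the kubernetes Service ClusterIP
        }
    }
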
Aug 12 11:48:16.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:48:16.244: INFO: namespace: e2e-tests-dns-dwbpp, resource: bindings, ignored listing per whitelist Aug 12 11:48:16.294: INFO: namespace e2e-tests-dns-dwbpp deletion completed in 8.086186352s • [SLOW TEST:16.331 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:48:16.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Aug 12 11:48:16.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Aug 12 11:48:16.637: INFO: stderr: "" Aug 12 11:48:16.637: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:48:16.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-472fl" for this suite. 
Aug 12 11:48:22.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:48:22.687: INFO: namespace: e2e-tests-kubectl-472fl, resource: bindings, ignored listing per whitelist Aug 12 11:48:22.722: INFO: namespace e2e-tests-kubectl-472fl deletion completed in 6.080157085s • [SLOW TEST:6.428 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:48:22.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 12 11:48:22.813: INFO: Waiting up to 5m0s for pod "pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-kb7fs" to be "success or failure" Aug 12 11:48:22.816: INFO: Pod "pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322081ms Aug 12 11:48:25.106: INFO: Pod "pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293782147s Aug 12 11:48:27.110: INFO: Pod "pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297515029s Aug 12 11:48:29.118: INFO: Pod "pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.305508221s STEP: Saw pod success Aug 12 11:48:29.118: INFO: Pod "pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:48:29.122: INFO: Trying to get logs from node hunter-worker2 pod pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 11:48:29.169: INFO: Waiting for pod pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:48:29.183: INFO: Pod pod-b973bb4f-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:48:29.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kb7fs" for this suite. 
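The (non-root,0666,default) naming above encodes the combination being checked: a non-root user, 0666 file permissions, and an emptyDir on the default (disk-backed) medium. A sketch of a pod in that shape, with an assumed UID and busybox standing in for the suite's test image:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        nonRootUID := int64(1001) // illustrative non-root UID

        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-non-root-0666"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &nonRootUID,
                },
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "" (default) means node-local disk rather than tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
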
Aug 12 11:48:35.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:48:35.296: INFO: namespace: e2e-tests-emptydir-kb7fs, resource: bindings, ignored listing per whitelist Aug 12 11:48:35.306: INFO: namespace e2e-tests-emptydir-kb7fs deletion completed in 6.116267738s • [SLOW TEST:12.583 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:48:35.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-c104e28d-dc91-11ea-9b9c-0242ac11000c STEP: Creating secret with name secret-projected-all-test-volume-c104e277-dc91-11ea-9b9c-0242ac11000c STEP: Creating a pod to test Check all projections for projected volume plugin Aug 12 11:48:35.576: INFO: Waiting up to 5m0s for pod "projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-79b9w" to be "success or failure" Aug 12 11:48:35.686: INFO: Pod "projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.61835ms Aug 12 11:48:37.690: INFO: Pod "projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114401604s Aug 12 11:48:39.694: INFO: Pod "projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118381832s Aug 12 11:48:41.697: INFO: Pod "projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121627873s STEP: Saw pod success Aug 12 11:48:41.697: INFO: Pod "projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:48:41.699: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c container projected-all-volume-test: STEP: delete the pod Aug 12 11:48:41.730: INFO: Waiting for pod projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:48:41.757: INFO: Pod projected-volume-c104e221-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:48:41.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-79b9w" for this suite. 
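The projected-volume pod above is the interesting part of this case: a single projected volume that merges a ConfigMap, a Secret, and downward API fields under one mount point. A rough equivalent (object names shortened, image and paths assumed):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-volume"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            // All three sources land under one mount point.
                            Sources: []corev1.VolumeProjection{
                                {ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
                                }},
                                {Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
                                }},
                                {DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                    }},
                                }},
                            },
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-all-volume-test",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "ls -R /all-volumes && cat /all-volumes/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/all-volumes",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
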
Aug 12 11:48:47.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:48:47.979: INFO: namespace: e2e-tests-projected-79b9w, resource: bindings, ignored listing per whitelist Aug 12 11:48:48.039: INFO: namespace e2e-tests-projected-79b9w deletion completed in 6.278569516s • [SLOW TEST:12.734 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:48:48.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-c891d51d-dc91-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 11:48:48.209: INFO: Waiting up to 5m0s for pod "pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-w6w8t" to be "success or failure" Aug 12 11:48:48.212: INFO: Pod "pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.097969ms Aug 12 11:48:50.216: INFO: Pod "pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007021224s Aug 12 11:48:52.268: INFO: Pod "pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.059780532s Aug 12 11:48:54.272: INFO: Pod "pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063786816s STEP: Saw pod success Aug 12 11:48:54.272: INFO: Pod "pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:48:54.275: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 11:48:54.292: INFO: Waiting for pod pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:48:54.301: INFO: Pod pod-secrets-c8987efc-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:48:54.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-w6w8t" for this suite. 
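The pod-secrets pod above mounts a Secret with a restrictive DefaultMode while running as a non-root user, relying on fsGroup so the files stay readable. A sketch of that combination; the mode, UID, and GID below follow the common 0440/1000/1001 pattern but are assumptions here, as are the image and mount path:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        var (
            defaultMode int32 = 0440
            runAsUser   int64 = 1000
            fsGroup     int64 = 1001
        )

        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-non-root"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &runAsUser,
                    FSGroup:   &fsGroup, // projected secret files become group-owned by this GID
                },
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  "secret-test",
                            DefaultMode: &defaultMode, // file mode applied to each projected key
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
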
Aug 12 11:49:04.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:49:04.408: INFO: namespace: e2e-tests-secrets-w6w8t, resource: bindings, ignored listing per whitelist Aug 12 11:49:04.415: INFO: namespace e2e-tests-secrets-w6w8t deletion completed in 10.111235775s • [SLOW TEST:16.375 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:49:04.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 12 11:49:06.911: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:06.914: INFO: Number of nodes with available pods: 0 Aug 12 11:49:06.914: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:08.403: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:09.492: INFO: Number of nodes with available pods: 0 Aug 12 11:49:09.492: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:10.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:11.462: INFO: Number of nodes with available pods: 0 Aug 12 11:49:11.462: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:12.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:12.135: INFO: Number of nodes with available pods: 0 Aug 12 11:49:12.136: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:12.971: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:12.973: INFO: Number of nodes with available pods: 0 Aug 12 11:49:12.973: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:14.269: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:14.291: INFO: Number of nodes with available pods: 0 Aug 12 11:49:14.291: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:15.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:15.177: INFO: Number of nodes with available pods: 0 Aug 12 11:49:15.177: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:16.396: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:16.435: INFO: Number of nodes with available pods: 0 Aug 12 11:49:16.435: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:16.918: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:16.921: INFO: Number of nodes with available pods: 0 Aug 12 11:49:16.921: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:49:17.919: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:17.923: INFO: Number of nodes with available pods: 2 Aug 12 11:49:17.923: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 12 11:49:18.161: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:49:18.189: INFO: Number of nodes with available pods: 2 Aug 12 11:49:18.189: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rzxrn, will wait for the garbage collector to delete the pods Aug 12 11:49:19.564: INFO: Deleting DaemonSet.extensions daemon-set took: 5.070143ms Aug 12 11:49:19.664: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.231565ms Aug 12 11:49:23.616: INFO: Number of nodes with available pods: 0 Aug 12 11:49:23.616: INFO: Number of running nodes: 0, number of available pods: 0 Aug 12 11:49:23.618: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rzxrn/daemonsets","resourceVersion":"5898498"},"items":null} Aug 12 11:49:23.619: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rzxrn/pods","resourceVersion":"5898498"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:49:23.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-rzxrn" for this suite. 
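The daemon-set fixture above is a plain DaemonSet whose pods the case deliberately marks Failed, expecting the controller to recreate ("revive") them; the repeated taint messages simply record that the control-plane node is skipped because no master toleration is set. A minimal DaemonSet of that shape (label key and port assumed, image borrowed from the nginx:1.14-alpine used elsewhere in this run):

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}

        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // No toleration for node-role.kubernetes.io/master:NoSchedule is added,
                        // which is why the log above skips the control-plane node.
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine",
                            Ports: []corev1.ContainerPort{{ContainerPort: 80}},
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }
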
Aug 12 11:49:31.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:49:31.729: INFO: namespace: e2e-tests-daemonsets-rzxrn, resource: bindings, ignored listing per whitelist Aug 12 11:49:31.775: INFO: namespace e2e-tests-daemonsets-rzxrn deletion completed in 8.147649253s • [SLOW TEST:27.360 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:49:31.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-e2c32d18-dc91-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 11:49:32.211: INFO: Waiting up to 5m0s for pod "pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-gmwpq" to be "success or failure" Aug 12 11:49:32.245: INFO: Pod "pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.403784ms Aug 12 11:49:34.351: INFO: Pod "pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14017834s Aug 12 11:49:36.355: INFO: Pod "pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143898117s Aug 12 11:49:38.358: INFO: Pod "pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146937204s STEP: Saw pod success Aug 12 11:49:38.358: INFO: Pod "pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:49:38.360: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 11:49:38.382: INFO: Waiting for pod pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:49:38.423: INFO: Pod pod-secrets-e2c9e1eb-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:49:38.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gmwpq" for this suite. 
Aug 12 11:49:44.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:49:44.495: INFO: namespace: e2e-tests-secrets-gmwpq, resource: bindings, ignored listing per whitelist Aug 12 11:49:44.588: INFO: namespace e2e-tests-secrets-gmwpq deletion completed in 6.160390246s • [SLOW TEST:12.812 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:49:44.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Aug 12 11:49:44.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:47.326: INFO: stderr: "" Aug 12 11:49:47.326: INFO: stdout: "pod/pause created\n" Aug 12 11:49:47.326: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 12 11:49:47.326: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-f2hdq" to be "running and ready" Aug 12 11:49:47.358: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.50881ms Aug 12 11:49:49.400: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073924598s Aug 12 11:49:51.404: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07807057s Aug 12 11:49:53.408: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.082284293s Aug 12 11:49:53.408: INFO: Pod "pause" satisfied condition "running and ready" Aug 12 11:49:53.408: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Aug 12 11:49:53.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:53.523: INFO: stderr: "" Aug 12 11:49:53.523: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 12 11:49:53.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:53.678: INFO: stderr: "" Aug 12 11:49:53.678: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 12 11:49:53.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:53.778: INFO: stderr: "" Aug 12 11:49:53.778: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 12 11:49:53.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:53.974: INFO: stderr: "" Aug 12 11:49:53.974: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Aug 12 11:49:53.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:54.610: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 12 11:49:54.610: INFO: stdout: "pod \"pause\" force deleted\n" Aug 12 11:49:54.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-f2hdq' Aug 12 11:49:54.849: INFO: stderr: "No resources found.\n" Aug 12 11:49:54.849: INFO: stdout: "" Aug 12 11:49:54.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-f2hdq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 12 11:49:54.957: INFO: stderr: "" Aug 12 11:49:54.957: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:49:54.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-f2hdq" for this suite. 
Aug 12 11:50:01.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:50:01.051: INFO: namespace: e2e-tests-kubectl-f2hdq, resource: bindings, ignored listing per whitelist Aug 12 11:50:01.109: INFO: namespace e2e-tests-kubectl-f2hdq deletion completed in 6.148365512s • [SLOW TEST:16.521 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:50:01.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-f4415305-dc91-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:50:01.530: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-lthpm" to be "success or failure" Aug 12 11:50:01.587: INFO: Pod "pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.738094ms Aug 12 11:50:03.683: INFO: Pod "pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152788813s Aug 12 11:50:05.695: INFO: Pod "pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16457622s STEP: Saw pod success Aug 12 11:50:05.695: INFO: Pod "pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:50:05.697: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 12 11:50:05.988: INFO: Waiting for pod pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c to disappear Aug 12 11:50:06.028: INFO: Pod pod-projected-configmaps-f4479974-dc91-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:50:06.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lthpm" for this suite. 
Aug 12 11:50:12.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:50:12.108: INFO: namespace: e2e-tests-projected-lthpm, resource: bindings, ignored listing per whitelist Aug 12 11:50:12.126: INFO: namespace e2e-tests-projected-lthpm deletion completed in 6.093047444s • [SLOW TEST:11.017 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:50:12.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 12 11:50:12.207: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:50:20.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-lprfw" for this suite. 
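The init-container pod logged above ("PodSpec: initContainers in spec.initContainers") is a RestartPolicy=Never pod whose init containers must all run to completion before the main container starts. A pod of roughly that shape might look like the following sketch; the container names, commands, and busybox tag are assumptions rather than the suite's exact fixture:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Init containers run one at a time, in order, before the main container.
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
                    {Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{{
                    Name:    "run1",
                    Image:   "busybox:1.29",
                    Command: []string{"/bin/true"},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
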
Aug 12 11:50:26.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:50:26.766: INFO: namespace: e2e-tests-init-container-lprfw, resource: bindings, ignored listing per whitelist Aug 12 11:50:26.778: INFO: namespace e2e-tests-init-container-lprfw deletion completed in 6.183631971s • [SLOW TEST:14.652 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:50:26.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 12 11:50:26.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fngth' Aug 12 11:50:27.015: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 12 11:50:27.015: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Aug 12 11:50:27.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-fngth' Aug 12 11:50:27.197: INFO: stderr: "" Aug 12 11:50:27.197: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:50:27.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fngth" for this suite. 
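The deprecation warning recorded above points at creating the Job directly instead of using the job/v1 generator. On clients that ship kubectl create job (kubectl 1.14 and later, so this assumes a newer client than the one in this run), the rough equivalent is:

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get jobs e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job

A Job created this way typically defaults to restartPolicy Never in its pod template, so adjust the manifest if the OnFailure semantics of the original command are required.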
Aug 12 11:50:49.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:50:49.250: INFO: namespace: e2e-tests-kubectl-fngth, resource: bindings, ignored listing per whitelist Aug 12 11:50:49.345: INFO: namespace e2e-tests-kubectl-fngth deletion completed in 22.127827955s • [SLOW TEST:22.567 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:50:49.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:50:50.227: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Aug 12 11:50:50.231: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-plfcw/daemonsets","resourceVersion":"5898828"},"items":null} Aug 12 11:50:50.233: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-plfcw/pods","resourceVersion":"5898828"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:50:50.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-plfcw" for this suite. 
Aug 12 11:50:56.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:50:56.914: INFO: namespace: e2e-tests-daemonsets-plfcw, resource: bindings, ignored listing per whitelist Aug 12 11:50:56.985: INFO: namespace e2e-tests-daemonsets-plfcw deletion completed in 6.743472153s S [SKIPPING] [7.639 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:50:50.227: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:50:56.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-156b08c3-dc92-11ea-9b9c-0242ac11000c STEP: Creating secret with name s-test-opt-upd-156b093e-dc92-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-156b08c3-dc92-11ea-9b9c-0242ac11000c STEP: Updating secret s-test-opt-upd-156b093e-dc92-11ea-9b9c-0242ac11000c STEP: Creating secret with name s-test-opt-create-156b095c-dc92-11ea-9b9c-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:52:37.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vqlc6" for this suite. 
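The optional-Secret update path exercised above can be sketched as follows; s-demo and secret-watch-demo are placeholder names, and marking the volume optional is what lets the pod keep running even after the Secret is deleted.

kubectl create secret generic s-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-watch-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    # keep printing the mounted key so updates become visible in the logs
    command: ["sh", "-c", "while true; do cat /etc/secret/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    secret:
      secretName: s-demo
      optional: true
EOF
# change the Secret in place; the kubelet refreshes the mounted file after a short delay
kubectl create secret generic s-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -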
Aug 12 11:52:59.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:53:00.014: INFO: namespace: e2e-tests-secrets-vqlc6, resource: bindings, ignored listing per whitelist Aug 12 11:53:00.022: INFO: namespace e2e-tests-secrets-vqlc6 deletion completed in 22.090561574s • [SLOW TEST:123.036 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:53:00.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:53:00.137: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 12 11:53:05.154: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 12 11:53:05.154: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 12 11:53:07.157: INFO: Creating deployment "test-rollover-deployment" Aug 12 11:53:07.207: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 12 11:53:09.213: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 12 11:53:09.220: INFO: Ensure that both replica sets have 1 created replica Aug 12 11:53:09.225: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 12 11:53:09.232: INFO: Updating deployment test-rollover-deployment Aug 12 11:53:09.232: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 12 11:53:11.246: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 12 11:53:11.252: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 12 11:53:11.259: INFO: all replica sets need to contain the pod-template-hash label Aug 12 11:53:11.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829989, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 11:53:13.267: INFO: all replica sets need to contain the pod-template-hash label Aug 12 11:53:13.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829992, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 11:53:15.267: INFO: all replica sets need to contain the pod-template-hash label Aug 12 11:53:15.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829992, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 11:53:17.268: INFO: all replica sets need to contain the pod-template-hash label Aug 12 11:53:17.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829992, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 11:53:19.268: INFO: all replica sets need to contain the pod-template-hash label Aug 12 11:53:19.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829992, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 11:53:21.268: INFO: all replica sets need to contain the pod-template-hash label Aug 12 11:53:21.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829992, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732829987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 11:53:23.361: INFO: Aug 12 11:53:23.361: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 12 11:53:23.368: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-8h9b7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8h9b7/deployments/test-rollover-deployment,UID:62f21454-dc92-11ea-b2c9-0242ac120008,ResourceVersion:5899239,Generation:2,CreationTimestamp:2020-08-12 11:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-12 11:53:07 +0000 UTC 2020-08-12 11:53:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-12 11:53:22 +0000 UTC 2020-08-12 11:53:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 12 11:53:23.370: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-8h9b7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8h9b7/replicasets/test-rollover-deployment-5b8479fdb6,UID:642eaf68-dc92-11ea-b2c9-0242ac120008,ResourceVersion:5899230,Generation:2,CreationTimestamp:2020-08-12 11:53:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 62f21454-dc92-11ea-b2c9-0242ac120008 0xc001178277 0xc001178278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 12 11:53:23.370: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 12 11:53:23.371: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-8h9b7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8h9b7/replicasets/test-rollover-controller,UID:5ec0621f-dc92-11ea-b2c9-0242ac120008,ResourceVersion:5899238,Generation:2,CreationTimestamp:2020-08-12 11:53:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 62f21454-dc92-11ea-b2c9-0242ac120008 0xc00160ff07 0xc00160ff08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 12 11:53:23.371: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-8h9b7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8h9b7/replicasets/test-rollover-deployment-58494b7559,UID:62fafcb6-dc92-11ea-b2c9-0242ac120008,ResourceVersion:5899194,Generation:2,CreationTimestamp:2020-08-12 11:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 62f21454-dc92-11ea-b2c9-0242ac120008 0xc001178027 0xc001178028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 12 11:53:23.374: INFO: Pod "test-rollover-deployment-5b8479fdb6-6dnfb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-6dnfb,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-8h9b7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8h9b7/pods/test-rollover-deployment-5b8479fdb6-6dnfb,UID:64416873-dc92-11ea-b2c9-0242ac120008,ResourceVersion:5899208,Generation:0,CreationTimestamp:2020-08-12 11:53:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 642eaf68-dc92-11ea-b2c9-0242ac120008 0xc000dbf497 0xc000dbf498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k265n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k265n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-k265n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000dbf5a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000dbf5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:53:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:53:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 11:53:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-08-12 11:53:09 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.137,StartTime:2020-08-12 11:53:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-12 11:53:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://cb012a4deb3bb2a5e3b6b4a5716a0f436f9264d51cfe2c554cd4156be568e93b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:53:23.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8h9b7" for this suite. Aug 12 11:53:29.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:53:29.613: INFO: namespace: e2e-tests-deployment-8h9b7, resource: bindings, ignored listing per whitelist Aug 12 11:53:29.642: INFO: namespace e2e-tests-deployment-8h9b7 deletion completed in 6.265778295s • [SLOW TEST:29.620 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:53:29.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Aug 12 11:53:29.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvlw7' Aug 12 11:53:30.137: INFO: stderr: "" Aug 12 11:53:30.137: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 12 11:53:31.141: INFO: Selector matched 1 pods for map[app:redis] Aug 12 11:53:31.141: INFO: Found 0 / 1 Aug 12 11:53:32.142: INFO: Selector matched 1 pods for map[app:redis] Aug 12 11:53:32.143: INFO: Found 0 / 1 Aug 12 11:53:33.166: INFO: Selector matched 1 pods for map[app:redis] Aug 12 11:53:33.166: INFO: Found 0 / 1 Aug 12 11:53:34.141: INFO: Selector matched 1 pods for map[app:redis] Aug 12 11:53:34.141: INFO: Found 1 / 1 Aug 12 11:53:34.141: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 12 11:53:34.144: INFO: Selector matched 1 pods for map[app:redis] Aug 12 11:53:34.145: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
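For reference, the rollover verified in the deployment test above (an image update under MaxUnavailable 0, MaxSurge 1 and MinReadySeconds 10, with the old ReplicaSets scaled to zero) corresponds roughly to the manual steps below, using the object names from the log and run in the deployment's namespace. This is an illustrative sketch, not part of the recorded run.

# switch the deployment to the new image and wait for the new ReplicaSet to become available
kubectl set image deployment/test-rollover-deployment redis=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/test-rollover-deployment
# the superseded ReplicaSets remain, scaled to 0 replicas, available for rollback
kubectl get replicasets -l name=rollover-pod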
Aug 12 11:53:34.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6kjvg --namespace=e2e-tests-kubectl-qvlw7 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 12 11:53:34.251: INFO: stderr: "" Aug 12 11:53:34.251: INFO: stdout: "pod/redis-master-6kjvg patched\n" STEP: checking annotations Aug 12 11:53:34.254: INFO: Selector matched 1 pods for map[app:redis] Aug 12 11:53:34.254: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:53:34.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qvlw7" for this suite. Aug 12 11:53:56.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:53:56.315: INFO: namespace: e2e-tests-kubectl-qvlw7, resource: bindings, ignored listing per whitelist Aug 12 11:53:56.359: INFO: namespace e2e-tests-kubectl-qvlw7 deletion completed in 22.10185488s • [SLOW TEST:26.717 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:53:56.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:53:56.448: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:53:57.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-7hxqh" for this suite. 
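The create/delete cycle above can be reproduced with a minimal CustomResourceDefinition such as the sketch below; foos.example.com is a placeholder, and apiextensions.k8s.io/v1beta1 is used on the assumption of a v1.13-era API server like the one in this run.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com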
Aug 12 11:54:03.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:54:03.610: INFO: namespace: e2e-tests-custom-resource-definition-7hxqh, resource: bindings, ignored listing per whitelist Aug 12 11:54:03.633: INFO: namespace e2e-tests-custom-resource-definition-7hxqh deletion completed in 6.106236916s • [SLOW TEST:7.273 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:54:03.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 11:54:03.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-d4f6b" to be "success or failure" Aug 12 11:54:03.843: INFO: Pod "downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.235615ms Aug 12 11:54:05.847: INFO: Pod "downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044476661s Aug 12 11:54:07.852: INFO: Pod "downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04916883s STEP: Saw pod success Aug 12 11:54:07.852: INFO: Pod "downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:54:07.855: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 11:54:07.872: INFO: Waiting for pod downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c to disappear Aug 12 11:54:07.877: INFO: Pod downwardapi-volume-84b286e8-dc92-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:54:07.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d4f6b" for this suite. 
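The DefaultMode check above boils down to a projected downwardAPI volume with an explicit defaultMode; a sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      # applied to the files the kubelet writes into the volume
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF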
Aug 12 11:54:13.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:54:13.997: INFO: namespace: e2e-tests-projected-d4f6b, resource: bindings, ignored listing per whitelist Aug 12 11:54:13.999: INFO: namespace e2e-tests-projected-d4f6b deletion completed in 6.117635746s • [SLOW TEST:10.366 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:54:13.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-8ad9220f-dc92-11ea-9b9c-0242ac11000c STEP: Creating configMap with name cm-test-opt-upd-8ad92283-dc92-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8ad9220f-dc92-11ea-9b9c-0242ac11000c STEP: Updating configmap cm-test-opt-upd-8ad92283-dc92-11ea-9b9c-0242ac11000c STEP: Creating configMap with name cm-test-opt-create-8ad922b4-dc92-11ea-9b9c-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:54:22.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8mk5h" for this suite. 
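The optional-ConfigMap update flow above mirrors the Secret example sketched earlier; the ConfigMap-side commands look like this (cm-demo is a placeholder, and the pod volume would reference it with optional: true):

kubectl create configmap cm-demo --from-literal=data-1=value-1
# update it in place; mounted copies are refreshed by the kubelet after a short delay
kubectl create configmap cm-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -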
Aug 12 11:54:44.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:54:44.372: INFO: namespace: e2e-tests-configmap-8mk5h, resource: bindings, ignored listing per whitelist Aug 12 11:54:44.431: INFO: namespace e2e-tests-configmap-8mk5h deletion completed in 22.150855201s • [SLOW TEST:30.432 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:54:44.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9cff9c60-dc92-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 11:54:44.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-w99wq" to be "success or failure" Aug 12 11:54:44.626: INFO: Pod "pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.762768ms Aug 12 11:54:46.630: INFO: Pod "pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018071755s Aug 12 11:54:48.634: INFO: Pod "pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022475223s STEP: Saw pod success Aug 12 11:54:48.634: INFO: Pod "pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:54:48.637: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 11:54:48.701: INFO: Waiting for pod pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c to disappear Aug 12 11:54:48.757: INFO: Pod pod-configmaps-9d0035b1-dc92-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:54:48.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-w99wq" for this suite. 
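The key-to-path mapping and per-item file mode tested above correspond to a volume definition like the sketch below (placeholder names):

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-items-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    # the key is exposed under the remapped path
    command: ["cat", "/etc/cfg/renamed-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: renamed-key
        mode: 0400
EOF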
Aug 12 11:54:54.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:54:54.799: INFO: namespace: e2e-tests-configmap-w99wq, resource: bindings, ignored listing per whitelist Aug 12 11:54:54.888: INFO: namespace e2e-tests-configmap-w99wq deletion completed in 6.126148725s • [SLOW TEST:10.457 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:54:54.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 12 11:54:54.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dxzls' Aug 12 11:54:55.106: INFO: stderr: "" Aug 12 11:54:55.106: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Aug 12 11:54:55.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-dxzls' Aug 12 11:55:07.474: INFO: stderr: "" Aug 12 11:55:07.474: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:55:07.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dxzls" for this suite. 
Aug 12 11:55:13.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:55:13.514: INFO: namespace: e2e-tests-kubectl-dxzls, resource: bindings, ignored listing per whitelist Aug 12 11:55:13.565: INFO: namespace e2e-tests-kubectl-dxzls deletion completed in 6.080944352s • [SLOW TEST:18.677 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:55:13.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-ae5ea97c-dc92-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 11:55:13.835: INFO: Waiting up to 5m0s for pod "pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-nlk6v" to be "success or failure" Aug 12 11:55:13.854: INFO: Pod "pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.000591ms Aug 12 11:55:15.970: INFO: Pod "pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135755938s Aug 12 11:55:17.975: INFO: Pod "pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.140254316s Aug 12 11:55:19.982: INFO: Pod "pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147759063s STEP: Saw pod success Aug 12 11:55:19.982: INFO: Pod "pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 11:55:19.985: INFO: Trying to get logs from node hunter-worker pod pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 11:55:20.005: INFO: Waiting for pod pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c to disappear Aug 12 11:55:20.071: INFO: Pod pod-secrets-ae5f4b04-dc92-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:55:20.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nlk6v" for this suite. 
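The Secret variant of the same mapping-and-mode mechanism differs only in the volume source; a compact sketch with placeholder names:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-items-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/sec/renamed-key"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: secret-demo
      items:
      - key: data-1
        path: renamed-key
        mode: 0400
EOF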
Aug 12 11:55:26.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:55:26.116: INFO: namespace: e2e-tests-secrets-nlk6v, resource: bindings, ignored listing per whitelist Aug 12 11:55:26.170: INFO: namespace e2e-tests-secrets-nlk6v deletion completed in 6.095039078s • [SLOW TEST:12.605 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:55:26.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:55:26.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-5q6dw" for this suite. 
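The check above essentially verifies that the built-in kubernetes service in the default namespace exposes the API securely on port 443; a rough manual equivalent is:

kubectl get service kubernetes --namespace=default -o wide
kubectl get endpoints kubernetes --namespace=default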
Aug 12 11:55:32.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:55:32.372: INFO: namespace: e2e-tests-services-5q6dw, resource: bindings, ignored listing per whitelist Aug 12 11:55:32.398: INFO: namespace e2e-tests-services-5q6dw deletion completed in 6.090278707s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.227 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:55:32.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Aug 12 11:55:32.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-8pxl8 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 12 11:55:37.186: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0812 11:55:37.099322 2905 log.go:172] (0xc0001388f0) (0xc0000efd60) Create stream\nI0812 11:55:37.099380 2905 log.go:172] (0xc0001388f0) (0xc0000efd60) Stream added, broadcasting: 1\nI0812 11:55:37.101996 2905 log.go:172] (0xc0001388f0) Reply frame received for 1\nI0812 11:55:37.102034 2905 log.go:172] (0xc0001388f0) (0xc00086c000) Create stream\nI0812 11:55:37.102046 2905 log.go:172] (0xc0001388f0) (0xc00086c000) Stream added, broadcasting: 3\nI0812 11:55:37.102890 2905 log.go:172] (0xc0001388f0) Reply frame received for 3\nI0812 11:55:37.102933 2905 log.go:172] (0xc0001388f0) (0xc00086c0a0) Create stream\nI0812 11:55:37.102946 2905 log.go:172] (0xc0001388f0) (0xc00086c0a0) Stream added, broadcasting: 5\nI0812 11:55:37.103904 2905 log.go:172] (0xc0001388f0) Reply frame received for 5\nI0812 11:55:37.103971 2905 log.go:172] (0xc0001388f0) (0xc00073c1e0) Create stream\nI0812 11:55:37.103997 2905 log.go:172] (0xc0001388f0) (0xc00073c1e0) Stream added, broadcasting: 7\nI0812 11:55:37.105151 2905 log.go:172] (0xc0001388f0) Reply frame received for 7\nI0812 11:55:37.105471 2905 log.go:172] (0xc00086c000) (3) Writing data frame\nI0812 11:55:37.105698 2905 log.go:172] (0xc00086c000) (3) Writing data frame\nI0812 11:55:37.106616 2905 log.go:172] (0xc0001388f0) Data frame received for 5\nI0812 11:55:37.106637 2905 log.go:172] (0xc00086c0a0) (5) Data frame handling\nI0812 11:55:37.106652 2905 log.go:172] (0xc00086c0a0) (5) Data frame sent\nI0812 11:55:37.107275 2905 log.go:172] (0xc0001388f0) Data frame received for 5\nI0812 11:55:37.107308 2905 log.go:172] (0xc00086c0a0) (5) Data frame handling\nI0812 11:55:37.107339 2905 log.go:172] (0xc00086c0a0) (5) Data frame sent\nI0812 11:55:37.159641 2905 log.go:172] (0xc0001388f0) Data frame received for 7\nI0812 11:55:37.159686 2905 log.go:172] (0xc00073c1e0) (7) Data frame handling\nI0812 11:55:37.159719 2905 log.go:172] (0xc0001388f0) Data frame received for 5\nI0812 11:55:37.159735 2905 log.go:172] (0xc00086c0a0) (5) Data frame handling\nI0812 11:55:37.159795 2905 log.go:172] (0xc0001388f0) Data frame received for 1\nI0812 11:55:37.159827 2905 log.go:172] (0xc0000efd60) (1) Data frame handling\nI0812 11:55:37.159844 2905 log.go:172] (0xc0000efd60) (1) Data frame sent\nI0812 11:55:37.160228 2905 log.go:172] (0xc0001388f0) (0xc00086c000) Stream removed, broadcasting: 3\nI0812 11:55:37.160275 2905 log.go:172] (0xc0001388f0) (0xc0000efd60) Stream removed, broadcasting: 1\nI0812 11:55:37.160289 2905 log.go:172] (0xc0001388f0) Go away received\nI0812 11:55:37.160546 2905 log.go:172] (0xc0001388f0) (0xc0000efd60) Stream removed, broadcasting: 1\nI0812 11:55:37.160561 2905 log.go:172] (0xc0001388f0) (0xc00086c000) Stream removed, broadcasting: 3\nI0812 11:55:37.160568 2905 log.go:172] (0xc0001388f0) (0xc00086c0a0) Stream removed, broadcasting: 5\nI0812 11:55:37.160574 2905 log.go:172] (0xc0001388f0) (0xc00073c1e0) Stream removed, broadcasting: 7\n" Aug 12 11:55:37.186: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:55:39.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8pxl8" for this suite. 
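On clients where the job/v1 generator has been removed (an assumption about newer kubectl releases than the one recorded here), the same attach-stdin-and-clean-up behaviour can be approximated with a bare pod:

echo 'abcd1234' | kubectl run e2e-test-rm-busybox --rm -i --restart=Never \
  --image=docker.io/library/busybox:1.29 -- sh -c 'cat && echo stdin closed'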
Aug 12 11:55:49.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:55:49.283: INFO: namespace: e2e-tests-kubectl-8pxl8, resource: bindings, ignored listing per whitelist Aug 12 11:55:49.321: INFO: namespace e2e-tests-kubectl-8pxl8 deletion completed in 10.093604303s • [SLOW TEST:16.923 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:55:49.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 12 11:55:49.445: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:55:56.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7pd6v" for this suite. 
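The init-container case above exercises the rule that, on a pod with RestartPolicy Never, a failing init container fails the whole pod and the app containers never start. A sketch of such a pod object in Go API types follows; the pod name, image, and commands are illustrative, not the suite's fixture.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fails-never-restarts"},
		Spec: corev1.PodSpec{
			// With RestartPolicyNever, a failing init container fails the pod outright.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fail",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo should never run"},
			}},
		},
	}
	// Print the manifest; after creating it, the expected end state is
	// Phase=Failed with the app container never started.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}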
Aug 12 11:56:02.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:56:02.221: INFO: namespace: e2e-tests-init-container-7pd6v, resource: bindings, ignored listing per whitelist Aug 12 11:56:02.252: INFO: namespace e2e-tests-init-container-7pd6v deletion completed in 6.137752138s • [SLOW TEST:12.930 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:56:02.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:56:02.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 12 11:56:02.503: INFO: stderr: "" Aug 12 11:56:02.503: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-11T21:49:24Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:56:02.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b5hl6" for this suite. 
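The version check above only asserts that both the client and server version blocks are printed. The server half of that output comes from the discovery endpoint; a minimal client-go sketch is below (the kubeconfig path is an assumption, and the call pattern assumes a recent client-go).

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Roughly the "Server Version" block printed by `kubectl version`.
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (git %s, built %s)\n", info.GitVersion, info.GitCommit, info.BuildDate)
}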
Aug 12 11:56:08.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:56:08.567: INFO: namespace: e2e-tests-kubectl-b5hl6, resource: bindings, ignored listing per whitelist Aug 12 11:56:08.590: INFO: namespace e2e-tests-kubectl-b5hl6 deletion completed in 6.081959024s • [SLOW TEST:6.338 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:56:08.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-77ll STEP: Creating a pod to test atomic-volume-subpath Aug 12 11:56:09.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-77ll" in namespace "e2e-tests-subpath-gpp4m" to be "success or failure" Aug 12 11:56:09.439: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Pending", Reason="", readiness=false. Elapsed: 67.034837ms Aug 12 11:56:11.485: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113658829s Aug 12 11:56:13.490: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118028179s Aug 12 11:56:15.494: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121965109s Aug 12 11:56:17.497: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 8.124909757s Aug 12 11:56:19.501: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 10.129176268s Aug 12 11:56:21.505: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 12.133531821s Aug 12 11:56:23.510: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 14.137871551s Aug 12 11:56:25.515: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 16.142893016s Aug 12 11:56:27.519: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 18.146984236s Aug 12 11:56:29.523: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 20.151152844s Aug 12 11:56:31.528: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.15597964s Aug 12 11:56:33.531: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Running", Reason="", readiness=false. Elapsed: 24.159501728s Aug 12 11:56:35.773: INFO: Pod "pod-subpath-test-projected-77ll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.401676636s STEP: Saw pod success Aug 12 11:56:35.774: INFO: Pod "pod-subpath-test-projected-77ll" satisfied condition "success or failure" Aug 12 11:56:35.776: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-77ll container test-container-subpath-projected-77ll: STEP: delete the pod Aug 12 11:56:35.817: INFO: Waiting for pod pod-subpath-test-projected-77ll to disappear Aug 12 11:56:35.827: INFO: Pod pod-subpath-test-projected-77ll no longer exists STEP: Deleting pod pod-subpath-test-projected-77ll Aug 12 11:56:35.827: INFO: Deleting pod "pod-subpath-test-projected-77ll" in namespace "e2e-tests-subpath-gpp4m" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:56:35.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-gpp4m" for this suite. Aug 12 11:56:41.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:56:42.024: INFO: namespace: e2e-tests-subpath-gpp4m, resource: bindings, ignored listing per whitelist Aug 12 11:56:42.058: INFO: namespace e2e-tests-subpath-gpp4m deletion completed in 6.225445663s • [SLOW TEST:33.467 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:56:42.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 11:56:42.718: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
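Before the DaemonSet polling that continues below, a note on the subpath test that just completed: pod-subpath-test-projected-77ll mounts a single key of a projected volume via subPath rather than the whole volume. A sketch of that shape in Go API types (the ConfigMap name, key, and mount paths are hypothetical):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					// A projected volume can merge several sources; here just one ConfigMap.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /mnt/demo-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/mnt/demo-key",
					SubPath:   "demo-key", // mount only this key, not the whole volume
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}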
Aug 12 11:56:42.737: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:42.739: INFO: Number of nodes with available pods: 0 Aug 12 11:56:42.739: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:56:43.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:43.747: INFO: Number of nodes with available pods: 0 Aug 12 11:56:43.747: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:56:44.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:44.748: INFO: Number of nodes with available pods: 0 Aug 12 11:56:44.748: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:56:45.743: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:45.746: INFO: Number of nodes with available pods: 0 Aug 12 11:56:45.746: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:56:46.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:46.747: INFO: Number of nodes with available pods: 0 Aug 12 11:56:46.747: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:56:47.798: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:47.802: INFO: Number of nodes with available pods: 1 Aug 12 11:56:47.802: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:56:48.840: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:48.966: INFO: Number of nodes with available pods: 2 Aug 12 11:56:48.966: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 12 11:56:49.425: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:49.425: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:49.560: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:50.642: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:50.642: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 12 11:56:50.647: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:51.630: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:51.631: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:51.631: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:51.635: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:52.804: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:52.804: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:52.804: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:52.809: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:53.563: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:53.564: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:53.564: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:53.567: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:54.714: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:54.714: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:54.714: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:54.718: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:55.564: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:55.564: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:55.564: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:55.569: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:56.564: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:56.565: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Aug 12 11:56:56.565: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:56.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:57.607: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:57.608: INFO: Wrong image for pod: daemon-set-lgtj6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:57.608: INFO: Pod daemon-set-lgtj6 is not available Aug 12 11:56:57.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:58.662: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:58.662: INFO: Pod daemon-set-gphlt is not available Aug 12 11:56:58.665: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:56:59.600: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:56:59.600: INFO: Pod daemon-set-gphlt is not available Aug 12 11:56:59.603: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:00.565: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:57:00.565: INFO: Pod daemon-set-gphlt is not available Aug 12 11:57:00.569: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:01.582: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:57:01.587: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:02.588: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Aug 12 11:57:02.662: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:03.564: INFO: Wrong image for pod: daemon-set-ct9kl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
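The "Wrong image for pod" polling in this block (which continues below) is the suite waiting for the RollingUpdate strategy to replace each DaemonSet pod after the pod template's image was changed. One common way to apply such an image change with client-go is a get-modify-update loop retried on conflict, sketched here; the namespace and the retry pattern are assumptions, not the framework's exact code, and the signatures assume a recent client-go.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "default", "daemon-set"
	ctx := context.TODO()

	// Re-read and update on conflict so a concurrent write doesn't lose the change.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := client.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Changing the pod template image is what triggers the rolling update.
		ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
		_, err = client.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}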
Aug 12 11:57:03.564: INFO: Pod daemon-set-ct9kl is not available Aug 12 11:57:03.568: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:04.564: INFO: Pod daemon-set-gpqnn is not available Aug 12 11:57:04.568: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Aug 12 11:57:04.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:04.656: INFO: Number of nodes with available pods: 1 Aug 12 11:57:04.656: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:57:05.799: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:05.803: INFO: Number of nodes with available pods: 1 Aug 12 11:57:05.803: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:57:06.661: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:06.664: INFO: Number of nodes with available pods: 1 Aug 12 11:57:06.664: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:57:07.661: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:07.665: INFO: Number of nodes with available pods: 1 Aug 12 11:57:07.665: INFO: Node hunter-worker is running more than one daemon pod Aug 12 11:57:08.661: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 12 11:57:08.664: INFO: Number of nodes with available pods: 2 Aug 12 11:57:08.664: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b9c94, will wait for the garbage collector to delete the pods Aug 12 11:57:08.744: INFO: Deleting DaemonSet.extensions daemon-set took: 5.607494ms Aug 12 11:57:08.844: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.27152ms Aug 12 11:57:17.647: INFO: Number of nodes with available pods: 0 Aug 12 11:57:17.647: INFO: Number of running nodes: 0, number of available pods: 0 Aug 12 11:57:17.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b9c94/daemonsets","resourceVersion":"5900119"},"items":null} Aug 12 11:57:17.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b9c94/pods","resourceVersion":"5900119"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:57:17.694: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-b9c94" for this suite. Aug 12 11:57:25.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:57:25.782: INFO: namespace: e2e-tests-daemonsets-b9c94, resource: bindings, ignored listing per whitelist Aug 12 11:57:25.783: INFO: namespace e2e-tests-daemonsets-b9c94 deletion completed in 8.085597605s • [SLOW TEST:43.725 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:57:25.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 12 11:57:39.944: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:39.944: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:39.982634 6 log.go:172] (0xc0003b7c30) (0xc001cb85a0) Create stream I0812 11:57:39.982662 6 log.go:172] (0xc0003b7c30) (0xc001cb85a0) Stream added, broadcasting: 1 I0812 11:57:39.984808 6 log.go:172] (0xc0003b7c30) Reply frame received for 1 I0812 11:57:39.984834 6 log.go:172] (0xc0003b7c30) (0xc0011af720) Create stream I0812 11:57:39.984841 6 log.go:172] (0xc0003b7c30) (0xc0011af720) Stream added, broadcasting: 3 I0812 11:57:39.985940 6 log.go:172] (0xc0003b7c30) Reply frame received for 3 I0812 11:57:39.986001 6 log.go:172] (0xc0003b7c30) (0xc0015b5b80) Create stream I0812 11:57:39.986020 6 log.go:172] (0xc0003b7c30) (0xc0015b5b80) Stream added, broadcasting: 5 I0812 11:57:39.987175 6 log.go:172] (0xc0003b7c30) Reply frame received for 5 I0812 11:57:40.066054 6 log.go:172] (0xc0003b7c30) Data frame received for 3 I0812 11:57:40.066120 6 log.go:172] (0xc0011af720) (3) Data frame handling I0812 11:57:40.066151 6 log.go:172] (0xc0011af720) (3) Data frame sent I0812 11:57:40.066178 6 log.go:172] (0xc0003b7c30) Data frame received for 3 I0812 11:57:40.066194 6 log.go:172] (0xc0011af720) (3) Data frame handling I0812 11:57:40.066217 6 log.go:172] (0xc0003b7c30) Data frame received for 5 I0812 11:57:40.066239 6 log.go:172] (0xc0015b5b80) (5) Data frame handling I0812 11:57:40.067494 6 log.go:172] (0xc0003b7c30) Data frame received for 1 I0812 11:57:40.067530 6 log.go:172] (0xc001cb85a0) (1) Data frame handling 
I0812 11:57:40.067545 6 log.go:172] (0xc001cb85a0) (1) Data frame sent I0812 11:57:40.067584 6 log.go:172] (0xc0003b7c30) (0xc001cb85a0) Stream removed, broadcasting: 1 I0812 11:57:40.067608 6 log.go:172] (0xc0003b7c30) Go away received I0812 11:57:40.067706 6 log.go:172] (0xc0003b7c30) (0xc001cb85a0) Stream removed, broadcasting: 1 I0812 11:57:40.067719 6 log.go:172] (0xc0003b7c30) (0xc0011af720) Stream removed, broadcasting: 3 I0812 11:57:40.067725 6 log.go:172] (0xc0003b7c30) (0xc0015b5b80) Stream removed, broadcasting: 5 Aug 12 11:57:40.067: INFO: Exec stderr: "" Aug 12 11:57:40.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.067: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.098344 6 log.go:172] (0xc00102e210) (0xc001d6c000) Create stream I0812 11:57:40.098385 6 log.go:172] (0xc00102e210) (0xc001d6c000) Stream added, broadcasting: 1 I0812 11:57:40.101259 6 log.go:172] (0xc00102e210) Reply frame received for 1 I0812 11:57:40.101339 6 log.go:172] (0xc00102e210) (0xc001cb8640) Create stream I0812 11:57:40.101365 6 log.go:172] (0xc00102e210) (0xc001cb8640) Stream added, broadcasting: 3 I0812 11:57:40.102522 6 log.go:172] (0xc00102e210) Reply frame received for 3 I0812 11:57:40.102577 6 log.go:172] (0xc00102e210) (0xc0013646e0) Create stream I0812 11:57:40.102592 6 log.go:172] (0xc00102e210) (0xc0013646e0) Stream added, broadcasting: 5 I0812 11:57:40.103694 6 log.go:172] (0xc00102e210) Reply frame received for 5 I0812 11:57:40.165392 6 log.go:172] (0xc00102e210) Data frame received for 5 I0812 11:57:40.165429 6 log.go:172] (0xc0013646e0) (5) Data frame handling I0812 11:57:40.165449 6 log.go:172] (0xc00102e210) Data frame received for 3 I0812 11:57:40.165458 6 log.go:172] (0xc001cb8640) (3) Data frame handling I0812 11:57:40.165466 6 log.go:172] (0xc001cb8640) (3) Data frame sent I0812 11:57:40.165477 6 log.go:172] (0xc00102e210) Data frame received for 3 I0812 11:57:40.165483 6 log.go:172] (0xc001cb8640) (3) Data frame handling I0812 11:57:40.167010 6 log.go:172] (0xc00102e210) Data frame received for 1 I0812 11:57:40.167049 6 log.go:172] (0xc001d6c000) (1) Data frame handling I0812 11:57:40.167090 6 log.go:172] (0xc001d6c000) (1) Data frame sent I0812 11:57:40.167121 6 log.go:172] (0xc00102e210) (0xc001d6c000) Stream removed, broadcasting: 1 I0812 11:57:40.167155 6 log.go:172] (0xc00102e210) Go away received I0812 11:57:40.167258 6 log.go:172] (0xc00102e210) (0xc001d6c000) Stream removed, broadcasting: 1 I0812 11:57:40.167283 6 log.go:172] (0xc00102e210) (0xc001cb8640) Stream removed, broadcasting: 3 I0812 11:57:40.167308 6 log.go:172] (0xc00102e210) (0xc0013646e0) Stream removed, broadcasting: 5 Aug 12 11:57:40.167: INFO: Exec stderr: "" Aug 12 11:57:40.167: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.167: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.231197 6 log.go:172] (0xc00102e6e0) (0xc001d6c280) Create stream I0812 11:57:40.231231 6 log.go:172] (0xc00102e6e0) (0xc001d6c280) Stream added, broadcasting: 1 I0812 11:57:40.232622 6 log.go:172] (0xc00102e6e0) Reply frame received for 1 I0812 11:57:40.232652 6 log.go:172] (0xc00102e6e0) (0xc001d6c320) Create stream I0812 11:57:40.232661 6 log.go:172] 
(0xc00102e6e0) (0xc001d6c320) Stream added, broadcasting: 3 I0812 11:57:40.233332 6 log.go:172] (0xc00102e6e0) Reply frame received for 3 I0812 11:57:40.233361 6 log.go:172] (0xc00102e6e0) (0xc001676280) Create stream I0812 11:57:40.233374 6 log.go:172] (0xc00102e6e0) (0xc001676280) Stream added, broadcasting: 5 I0812 11:57:40.233951 6 log.go:172] (0xc00102e6e0) Reply frame received for 5 I0812 11:57:40.293192 6 log.go:172] (0xc00102e6e0) Data frame received for 5 I0812 11:57:40.293249 6 log.go:172] (0xc001676280) (5) Data frame handling I0812 11:57:40.293281 6 log.go:172] (0xc00102e6e0) Data frame received for 3 I0812 11:57:40.293304 6 log.go:172] (0xc001d6c320) (3) Data frame handling I0812 11:57:40.293328 6 log.go:172] (0xc001d6c320) (3) Data frame sent I0812 11:57:40.293343 6 log.go:172] (0xc00102e6e0) Data frame received for 3 I0812 11:57:40.293353 6 log.go:172] (0xc001d6c320) (3) Data frame handling I0812 11:57:40.294520 6 log.go:172] (0xc00102e6e0) Data frame received for 1 I0812 11:57:40.294553 6 log.go:172] (0xc001d6c280) (1) Data frame handling I0812 11:57:40.294569 6 log.go:172] (0xc001d6c280) (1) Data frame sent I0812 11:57:40.294584 6 log.go:172] (0xc00102e6e0) (0xc001d6c280) Stream removed, broadcasting: 1 I0812 11:57:40.294617 6 log.go:172] (0xc00102e6e0) Go away received I0812 11:57:40.294707 6 log.go:172] (0xc00102e6e0) (0xc001d6c280) Stream removed, broadcasting: 1 I0812 11:57:40.294729 6 log.go:172] (0xc00102e6e0) (0xc001d6c320) Stream removed, broadcasting: 3 I0812 11:57:40.294741 6 log.go:172] (0xc00102e6e0) (0xc001676280) Stream removed, broadcasting: 5 Aug 12 11:57:40.294: INFO: Exec stderr: "" Aug 12 11:57:40.294: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.294: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.330452 6 log.go:172] (0xc00102ebb0) (0xc001d6c5a0) Create stream I0812 11:57:40.330488 6 log.go:172] (0xc00102ebb0) (0xc001d6c5a0) Stream added, broadcasting: 1 I0812 11:57:40.332530 6 log.go:172] (0xc00102ebb0) Reply frame received for 1 I0812 11:57:40.332565 6 log.go:172] (0xc00102ebb0) (0xc001364780) Create stream I0812 11:57:40.332579 6 log.go:172] (0xc00102ebb0) (0xc001364780) Stream added, broadcasting: 3 I0812 11:57:40.333676 6 log.go:172] (0xc00102ebb0) Reply frame received for 3 I0812 11:57:40.333707 6 log.go:172] (0xc00102ebb0) (0xc001cb86e0) Create stream I0812 11:57:40.333716 6 log.go:172] (0xc00102ebb0) (0xc001cb86e0) Stream added, broadcasting: 5 I0812 11:57:40.334786 6 log.go:172] (0xc00102ebb0) Reply frame received for 5 I0812 11:57:40.391545 6 log.go:172] (0xc00102ebb0) Data frame received for 5 I0812 11:57:40.391606 6 log.go:172] (0xc001cb86e0) (5) Data frame handling I0812 11:57:40.391658 6 log.go:172] (0xc00102ebb0) Data frame received for 3 I0812 11:57:40.391697 6 log.go:172] (0xc001364780) (3) Data frame handling I0812 11:57:40.391735 6 log.go:172] (0xc001364780) (3) Data frame sent I0812 11:57:40.391756 6 log.go:172] (0xc00102ebb0) Data frame received for 3 I0812 11:57:40.391778 6 log.go:172] (0xc001364780) (3) Data frame handling I0812 11:57:40.393299 6 log.go:172] (0xc00102ebb0) Data frame received for 1 I0812 11:57:40.393329 6 log.go:172] (0xc001d6c5a0) (1) Data frame handling I0812 11:57:40.393370 6 log.go:172] (0xc001d6c5a0) (1) Data frame sent I0812 11:57:40.393413 6 log.go:172] (0xc00102ebb0) (0xc001d6c5a0) Stream removed, broadcasting: 1 
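The exec streams in this block (which continue below) are simply `cat /etc/hosts` and `cat /etc/hosts-original` runs inside the test's containers. The property being checked is that kubelet manages /etc/hosts for ordinary pods but leaves it alone for hostNetwork pods and for containers that mount /etc/hosts themselves. A sketch of the first two pod shapes follows; the names and images are illustrative, not the suite's fixtures, and the third case (a container mounting /etc/hosts from a volume) is omitted.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Ordinary pod: kubelet writes and manages /etc/hosts inside the container.
	managed := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-managed"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	// hostNetwork pod: the container sees the node's own /etc/hosts,
	// so kubelet does not manage it.
	hostNet := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-hostnetwork"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	for _, p := range []*corev1.Pod{managed, hostNet} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}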
I0812 11:57:40.393451 6 log.go:172] (0xc00102ebb0) Go away received I0812 11:57:40.393570 6 log.go:172] (0xc00102ebb0) (0xc001d6c5a0) Stream removed, broadcasting: 1 I0812 11:57:40.393601 6 log.go:172] (0xc00102ebb0) (0xc001364780) Stream removed, broadcasting: 3 I0812 11:57:40.393623 6 log.go:172] (0xc00102ebb0) (0xc001cb86e0) Stream removed, broadcasting: 5 Aug 12 11:57:40.393: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 12 11:57:40.393: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.393: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.425279 6 log.go:172] (0xc0017682c0) (0xc001364be0) Create stream I0812 11:57:40.425307 6 log.go:172] (0xc0017682c0) (0xc001364be0) Stream added, broadcasting: 1 I0812 11:57:40.427922 6 log.go:172] (0xc0017682c0) Reply frame received for 1 I0812 11:57:40.427953 6 log.go:172] (0xc0017682c0) (0xc001364d20) Create stream I0812 11:57:40.427963 6 log.go:172] (0xc0017682c0) (0xc001364d20) Stream added, broadcasting: 3 I0812 11:57:40.428942 6 log.go:172] (0xc0017682c0) Reply frame received for 3 I0812 11:57:40.428974 6 log.go:172] (0xc0017682c0) (0xc001364e60) Create stream I0812 11:57:40.428985 6 log.go:172] (0xc0017682c0) (0xc001364e60) Stream added, broadcasting: 5 I0812 11:57:40.429874 6 log.go:172] (0xc0017682c0) Reply frame received for 5 I0812 11:57:40.496004 6 log.go:172] (0xc0017682c0) Data frame received for 5 I0812 11:57:40.496108 6 log.go:172] (0xc001364e60) (5) Data frame handling I0812 11:57:40.496177 6 log.go:172] (0xc0017682c0) Data frame received for 3 I0812 11:57:40.496200 6 log.go:172] (0xc001364d20) (3) Data frame handling I0812 11:57:40.496220 6 log.go:172] (0xc001364d20) (3) Data frame sent I0812 11:57:40.496249 6 log.go:172] (0xc0017682c0) Data frame received for 3 I0812 11:57:40.496264 6 log.go:172] (0xc001364d20) (3) Data frame handling I0812 11:57:40.497859 6 log.go:172] (0xc0017682c0) Data frame received for 1 I0812 11:57:40.497899 6 log.go:172] (0xc001364be0) (1) Data frame handling I0812 11:57:40.497935 6 log.go:172] (0xc001364be0) (1) Data frame sent I0812 11:57:40.497967 6 log.go:172] (0xc0017682c0) (0xc001364be0) Stream removed, broadcasting: 1 I0812 11:57:40.498037 6 log.go:172] (0xc0017682c0) Go away received I0812 11:57:40.498115 6 log.go:172] (0xc0017682c0) (0xc001364be0) Stream removed, broadcasting: 1 I0812 11:57:40.498144 6 log.go:172] (0xc0017682c0) (0xc001364d20) Stream removed, broadcasting: 3 I0812 11:57:40.498163 6 log.go:172] (0xc0017682c0) (0xc001364e60) Stream removed, broadcasting: 5 Aug 12 11:57:40.498: INFO: Exec stderr: "" Aug 12 11:57:40.498: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.498: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.546055 6 log.go:172] (0xc001d44160) (0xc001cb8960) Create stream I0812 11:57:40.546089 6 log.go:172] (0xc001d44160) (0xc001cb8960) Stream added, broadcasting: 1 I0812 11:57:40.548133 6 log.go:172] (0xc001d44160) Reply frame received for 1 I0812 11:57:40.548182 6 log.go:172] (0xc001d44160) (0xc001d6c640) Create stream I0812 11:57:40.548212 6 log.go:172] (0xc001d44160) (0xc001d6c640) Stream added, broadcasting: 3 I0812 
11:57:40.549198 6 log.go:172] (0xc001d44160) Reply frame received for 3 I0812 11:57:40.549239 6 log.go:172] (0xc001d44160) (0xc0011af7c0) Create stream I0812 11:57:40.549253 6 log.go:172] (0xc001d44160) (0xc0011af7c0) Stream added, broadcasting: 5 I0812 11:57:40.549961 6 log.go:172] (0xc001d44160) Reply frame received for 5 I0812 11:57:40.601678 6 log.go:172] (0xc001d44160) Data frame received for 3 I0812 11:57:40.601706 6 log.go:172] (0xc001d6c640) (3) Data frame handling I0812 11:57:40.601732 6 log.go:172] (0xc001d6c640) (3) Data frame sent I0812 11:57:40.601744 6 log.go:172] (0xc001d44160) Data frame received for 3 I0812 11:57:40.601772 6 log.go:172] (0xc001d6c640) (3) Data frame handling I0812 11:57:40.602013 6 log.go:172] (0xc001d44160) Data frame received for 5 I0812 11:57:40.602058 6 log.go:172] (0xc0011af7c0) (5) Data frame handling I0812 11:57:40.603328 6 log.go:172] (0xc001d44160) Data frame received for 1 I0812 11:57:40.603353 6 log.go:172] (0xc001cb8960) (1) Data frame handling I0812 11:57:40.603362 6 log.go:172] (0xc001cb8960) (1) Data frame sent I0812 11:57:40.603381 6 log.go:172] (0xc001d44160) (0xc001cb8960) Stream removed, broadcasting: 1 I0812 11:57:40.603397 6 log.go:172] (0xc001d44160) Go away received I0812 11:57:40.603483 6 log.go:172] (0xc001d44160) (0xc001cb8960) Stream removed, broadcasting: 1 I0812 11:57:40.603505 6 log.go:172] (0xc001d44160) (0xc001d6c640) Stream removed, broadcasting: 3 I0812 11:57:40.603517 6 log.go:172] (0xc001d44160) (0xc0011af7c0) Stream removed, broadcasting: 5 Aug 12 11:57:40.603: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 12 11:57:40.603: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.603: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.634646 6 log.go:172] (0xc00094f810) (0xc0011afa40) Create stream I0812 11:57:40.634692 6 log.go:172] (0xc00094f810) (0xc0011afa40) Stream added, broadcasting: 1 I0812 11:57:40.636825 6 log.go:172] (0xc00094f810) Reply frame received for 1 I0812 11:57:40.636857 6 log.go:172] (0xc00094f810) (0xc001364fa0) Create stream I0812 11:57:40.636867 6 log.go:172] (0xc00094f810) (0xc001364fa0) Stream added, broadcasting: 3 I0812 11:57:40.637590 6 log.go:172] (0xc00094f810) Reply frame received for 3 I0812 11:57:40.637624 6 log.go:172] (0xc00094f810) (0xc001cb8a00) Create stream I0812 11:57:40.637636 6 log.go:172] (0xc00094f810) (0xc001cb8a00) Stream added, broadcasting: 5 I0812 11:57:40.638289 6 log.go:172] (0xc00094f810) Reply frame received for 5 I0812 11:57:40.713209 6 log.go:172] (0xc00094f810) Data frame received for 5 I0812 11:57:40.713257 6 log.go:172] (0xc001cb8a00) (5) Data frame handling I0812 11:57:40.713288 6 log.go:172] (0xc00094f810) Data frame received for 3 I0812 11:57:40.713299 6 log.go:172] (0xc001364fa0) (3) Data frame handling I0812 11:57:40.713311 6 log.go:172] (0xc001364fa0) (3) Data frame sent I0812 11:57:40.713321 6 log.go:172] (0xc00094f810) Data frame received for 3 I0812 11:57:40.713329 6 log.go:172] (0xc001364fa0) (3) Data frame handling I0812 11:57:40.714679 6 log.go:172] (0xc00094f810) Data frame received for 1 I0812 11:57:40.714700 6 log.go:172] (0xc0011afa40) (1) Data frame handling I0812 11:57:40.714713 6 log.go:172] (0xc0011afa40) (1) Data frame sent I0812 11:57:40.714723 6 log.go:172] (0xc00094f810) 
(0xc0011afa40) Stream removed, broadcasting: 1 I0812 11:57:40.714789 6 log.go:172] (0xc00094f810) (0xc0011afa40) Stream removed, broadcasting: 1 I0812 11:57:40.714804 6 log.go:172] (0xc00094f810) (0xc001364fa0) Stream removed, broadcasting: 3 I0812 11:57:40.714932 6 log.go:172] (0xc00094f810) Go away received I0812 11:57:40.715051 6 log.go:172] (0xc00094f810) (0xc001cb8a00) Stream removed, broadcasting: 5 Aug 12 11:57:40.715: INFO: Exec stderr: "" Aug 12 11:57:40.715: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.715: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.741953 6 log.go:172] (0xc001768790) (0xc001365360) Create stream I0812 11:57:40.741970 6 log.go:172] (0xc001768790) (0xc001365360) Stream added, broadcasting: 1 I0812 11:57:40.744862 6 log.go:172] (0xc001768790) Reply frame received for 1 I0812 11:57:40.744895 6 log.go:172] (0xc001768790) (0xc001d6c6e0) Create stream I0812 11:57:40.744908 6 log.go:172] (0xc001768790) (0xc001d6c6e0) Stream added, broadcasting: 3 I0812 11:57:40.746039 6 log.go:172] (0xc001768790) Reply frame received for 3 I0812 11:57:40.746063 6 log.go:172] (0xc001768790) (0xc001cb8aa0) Create stream I0812 11:57:40.746078 6 log.go:172] (0xc001768790) (0xc001cb8aa0) Stream added, broadcasting: 5 I0812 11:57:40.746954 6 log.go:172] (0xc001768790) Reply frame received for 5 I0812 11:57:40.811235 6 log.go:172] (0xc001768790) Data frame received for 3 I0812 11:57:40.811278 6 log.go:172] (0xc001d6c6e0) (3) Data frame handling I0812 11:57:40.811296 6 log.go:172] (0xc001d6c6e0) (3) Data frame sent I0812 11:57:40.811319 6 log.go:172] (0xc001768790) Data frame received for 3 I0812 11:57:40.811338 6 log.go:172] (0xc001d6c6e0) (3) Data frame handling I0812 11:57:40.811385 6 log.go:172] (0xc001768790) Data frame received for 5 I0812 11:57:40.811409 6 log.go:172] (0xc001cb8aa0) (5) Data frame handling I0812 11:57:40.813205 6 log.go:172] (0xc001768790) Data frame received for 1 I0812 11:57:40.813228 6 log.go:172] (0xc001365360) (1) Data frame handling I0812 11:57:40.813241 6 log.go:172] (0xc001365360) (1) Data frame sent I0812 11:57:40.813259 6 log.go:172] (0xc001768790) (0xc001365360) Stream removed, broadcasting: 1 I0812 11:57:40.813274 6 log.go:172] (0xc001768790) Go away received I0812 11:57:40.813362 6 log.go:172] (0xc001768790) (0xc001365360) Stream removed, broadcasting: 1 I0812 11:57:40.813380 6 log.go:172] (0xc001768790) (0xc001d6c6e0) Stream removed, broadcasting: 3 I0812 11:57:40.813398 6 log.go:172] (0xc001768790) (0xc001cb8aa0) Stream removed, broadcasting: 5 Aug 12 11:57:40.813: INFO: Exec stderr: "" Aug 12 11:57:40.813: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.813: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.834680 6 log.go:172] (0xc001768c60) (0xc001365680) Create stream I0812 11:57:40.834700 6 log.go:172] (0xc001768c60) (0xc001365680) Stream added, broadcasting: 1 I0812 11:57:40.840981 6 log.go:172] (0xc001768c60) Reply frame received for 1 I0812 11:57:40.844929 6 log.go:172] (0xc001768c60) (0xc000b8c1e0) Create stream I0812 11:57:40.844970 6 log.go:172] (0xc001768c60) (0xc000b8c1e0) Stream added, broadcasting: 3 I0812 11:57:40.848253 6 log.go:172] 
(0xc001768c60) Reply frame received for 3 I0812 11:57:40.848303 6 log.go:172] (0xc001768c60) (0xc0015b4000) Create stream I0812 11:57:40.848316 6 log.go:172] (0xc001768c60) (0xc0015b4000) Stream added, broadcasting: 5 I0812 11:57:40.849154 6 log.go:172] (0xc001768c60) Reply frame received for 5 I0812 11:57:40.917251 6 log.go:172] (0xc001768c60) Data frame received for 5 I0812 11:57:40.917319 6 log.go:172] (0xc0015b4000) (5) Data frame handling I0812 11:57:40.917353 6 log.go:172] (0xc001768c60) Data frame received for 3 I0812 11:57:40.917370 6 log.go:172] (0xc000b8c1e0) (3) Data frame handling I0812 11:57:40.917401 6 log.go:172] (0xc000b8c1e0) (3) Data frame sent I0812 11:57:40.917418 6 log.go:172] (0xc001768c60) Data frame received for 3 I0812 11:57:40.917430 6 log.go:172] (0xc000b8c1e0) (3) Data frame handling I0812 11:57:40.918837 6 log.go:172] (0xc001768c60) Data frame received for 1 I0812 11:57:40.918864 6 log.go:172] (0xc001365680) (1) Data frame handling I0812 11:57:40.918879 6 log.go:172] (0xc001365680) (1) Data frame sent I0812 11:57:40.918891 6 log.go:172] (0xc001768c60) (0xc001365680) Stream removed, broadcasting: 1 I0812 11:57:40.918977 6 log.go:172] (0xc001768c60) (0xc001365680) Stream removed, broadcasting: 1 I0812 11:57:40.919049 6 log.go:172] (0xc001768c60) (0xc000b8c1e0) Stream removed, broadcasting: 3 I0812 11:57:40.919059 6 log.go:172] (0xc001768c60) (0xc0015b4000) Stream removed, broadcasting: 5 Aug 12 11:57:40.919: INFO: Exec stderr: "" Aug 12 11:57:40.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-qtq2f PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 11:57:40.919: INFO: >>> kubeConfig: /root/.kube/config I0812 11:57:40.919154 6 log.go:172] (0xc001768c60) Go away received I0812 11:57:40.947221 6 log.go:172] (0xc0003b7c30) (0xc00062c780) Create stream I0812 11:57:40.947244 6 log.go:172] (0xc0003b7c30) (0xc00062c780) Stream added, broadcasting: 1 I0812 11:57:40.948851 6 log.go:172] (0xc0003b7c30) Reply frame received for 1 I0812 11:57:40.948904 6 log.go:172] (0xc0003b7c30) (0xc0003b2000) Create stream I0812 11:57:40.948920 6 log.go:172] (0xc0003b7c30) (0xc0003b2000) Stream added, broadcasting: 3 I0812 11:57:40.949797 6 log.go:172] (0xc0003b7c30) Reply frame received for 3 I0812 11:57:40.949836 6 log.go:172] (0xc0003b7c30) (0xc0003b2280) Create stream I0812 11:57:40.949844 6 log.go:172] (0xc0003b7c30) (0xc0003b2280) Stream added, broadcasting: 5 I0812 11:57:40.950666 6 log.go:172] (0xc0003b7c30) Reply frame received for 5 I0812 11:57:41.030997 6 log.go:172] (0xc0003b7c30) Data frame received for 5 I0812 11:57:41.031028 6 log.go:172] (0xc0003b2280) (5) Data frame handling I0812 11:57:41.031058 6 log.go:172] (0xc0003b7c30) Data frame received for 3 I0812 11:57:41.031083 6 log.go:172] (0xc0003b2000) (3) Data frame handling I0812 11:57:41.031097 6 log.go:172] (0xc0003b2000) (3) Data frame sent I0812 11:57:41.031108 6 log.go:172] (0xc0003b7c30) Data frame received for 3 I0812 11:57:41.031114 6 log.go:172] (0xc0003b2000) (3) Data frame handling I0812 11:57:41.032268 6 log.go:172] (0xc0003b7c30) Data frame received for 1 I0812 11:57:41.032305 6 log.go:172] (0xc00062c780) (1) Data frame handling I0812 11:57:41.032330 6 log.go:172] (0xc00062c780) (1) Data frame sent I0812 11:57:41.032354 6 log.go:172] (0xc0003b7c30) (0xc00062c780) Stream removed, broadcasting: 1 I0812 11:57:41.032397 6 log.go:172] (0xc0003b7c30) Go away received I0812 
11:57:41.032486 6 log.go:172] (0xc0003b7c30) (0xc00062c780) Stream removed, broadcasting: 1 I0812 11:57:41.032513 6 log.go:172] (0xc0003b7c30) (0xc0003b2000) Stream removed, broadcasting: 3 I0812 11:57:41.032536 6 log.go:172] (0xc0003b7c30) (0xc0003b2280) Stream removed, broadcasting: 5 Aug 12 11:57:41.032: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:57:41.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-qtq2f" for this suite. Aug 12 11:58:31.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:58:31.209: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-qtq2f, resource: bindings, ignored listing per whitelist Aug 12 11:58:31.238: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-qtq2f deletion completed in 50.201430418s • [SLOW TEST:65.455 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:58:31.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 12 11:58:31.346: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jbv5f,SelfLink:/api/v1/namespaces/e2e-tests-watch-jbv5f/configmaps/e2e-watch-test-label-changed,UID:2429e311-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5900355,Generation:0,CreationTimestamp:2020-08-12 11:58:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 12 11:58:31.346: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jbv5f,SelfLink:/api/v1/namespaces/e2e-tests-watch-jbv5f/configmaps/e2e-watch-test-label-changed,UID:2429e311-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5900356,Generation:0,CreationTimestamp:2020-08-12 11:58:31 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 12 11:58:31.346: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jbv5f,SelfLink:/api/v1/namespaces/e2e-tests-watch-jbv5f/configmaps/e2e-watch-test-label-changed,UID:2429e311-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5900357,Generation:0,CreationTimestamp:2020-08-12 11:58:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 12 11:58:41.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jbv5f,SelfLink:/api/v1/namespaces/e2e-tests-watch-jbv5f/configmaps/e2e-watch-test-label-changed,UID:2429e311-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5900378,Generation:0,CreationTimestamp:2020-08-12 11:58:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 12 11:58:41.413: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jbv5f,SelfLink:/api/v1/namespaces/e2e-tests-watch-jbv5f/configmaps/e2e-watch-test-label-changed,UID:2429e311-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5900379,Generation:0,CreationTimestamp:2020-08-12 11:58:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 12 11:58:41.413: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jbv5f,SelfLink:/api/v1/namespaces/e2e-tests-watch-jbv5f/configmaps/e2e-watch-test-label-changed,UID:2429e311-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5900380,Generation:0,CreationTimestamp:2020-08-12 11:58:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 11:58:41.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jbv5f" for this suite. Aug 12 11:58:47.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 11:58:47.473: INFO: namespace: e2e-tests-watch-jbv5f, resource: bindings, ignored listing per whitelist Aug 12 11:58:47.523: INFO: namespace e2e-tests-watch-jbv5f deletion completed in 6.106004484s • [SLOW TEST:16.285 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 11:58:47.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-sb27x [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Aug 12 11:58:47.673: INFO: Found 0 stateful pods, waiting for 3 Aug 12 11:58:57.678: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:58:57.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:58:57.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 12 11:59:07.677: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:59:07.677: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:59:07.677: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 12 11:59:07.701: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 12 11:59:18.028: INFO: Updating stateful set ss2 Aug 12 11:59:18.037: INFO: Waiting for Pod e2e-tests-statefulset-sb27x/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Aug 12 11:59:28.831: INFO: Found 
2 stateful pods, waiting for 3 Aug 12 11:59:38.834: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:59:38.834: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:59:38.834: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 12 11:59:48.854: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:59:48.854: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 12 11:59:48.854: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 12 11:59:48.874: INFO: Updating stateful set ss2 Aug 12 11:59:48.991: INFO: Waiting for Pod e2e-tests-statefulset-sb27x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 12 11:59:59.013: INFO: Updating stateful set ss2 Aug 12 11:59:59.065: INFO: Waiting for StatefulSet e2e-tests-statefulset-sb27x/ss2 to complete update Aug 12 11:59:59.065: INFO: Waiting for Pod e2e-tests-statefulset-sb27x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 12 12:00:09.170: INFO: Waiting for StatefulSet e2e-tests-statefulset-sb27x/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 12 12:00:19.097: INFO: Deleting all statefulset in ns e2e-tests-statefulset-sb27x Aug 12 12:00:19.099: INFO: Scaling statefulset ss2 to 0 Aug 12 12:00:49.123: INFO: Waiting for statefulset status.replicas updated to 0 Aug 12 12:00:49.127: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:00:49.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-sb27x" for this suite. 
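For reference, the canary and phased roll-out exercised above are driven by the StatefulSet RollingUpdate partition: only ordinals at or above the partition pick up the new template revision, and lowering the partition later rolls the change out in phases. Below is a minimal Go sketch of such a spec using the k8s.io/api types; the names, labels, image and replica count are illustrative assumptions, not the test's own manifest.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss2"}
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // assumed headless service in the same namespace
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine", // the updated image from the log
					}},
				},
			},
			// With 3 replicas, Partition=2 means only ordinal 2 (ss2-2) receives
			// the new revision: the canary step. Lowering the partition afterwards
			// updates the remaining ordinals in phases.
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: int32Ptr(2),
				},
			},
		},
	}
	fmt.Println(ss.Name, *ss.Spec.UpdateStrategy.RollingUpdate.Partition)
}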
Aug 12 12:00:59.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:00:59.235: INFO: namespace: e2e-tests-statefulset-sb27x, resource: bindings, ignored listing per whitelist Aug 12 12:00:59.245: INFO: namespace e2e-tests-statefulset-sb27x deletion completed in 10.095153159s • [SLOW TEST:131.722 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:00:59.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-7c68122c-dc93-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:01:05.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cqxc8" for this suite. 
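The binary-data check above relies on the ConfigMap BinaryData field, whose keys are projected into the mounted volume alongside the plain Data keys. A minimal sketch follows, assuming illustrative key names and a busybox reader container rather than the test's actual spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},            // text key
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}}, // binary key
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/dump.bin && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Println(cm.Name, pod.Name)
}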
Aug 12 12:01:27.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:01:27.479: INFO: namespace: e2e-tests-configmap-cqxc8, resource: bindings, ignored listing per whitelist Aug 12 12:01:27.515: INFO: namespace e2e-tests-configmap-cqxc8 deletion completed in 22.106789679s • [SLOW TEST:28.270 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:01:27.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-6l8zb Aug 12 12:01:31.652: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-6l8zb STEP: checking the pod's current state and verifying that restartCount is present Aug 12 12:01:31.655: INFO: Initial restart count of pod liveness-http is 0 Aug 12 12:01:51.701: INFO: Restart count of pod e2e-tests-container-probe-6l8zb/liveness-http is now 1 (20.046015357s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:01:51.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-6l8zb" for this suite. 
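The restart observed above is produced by an HTTP liveness probe against /healthz: once the endpoint starts failing, the kubelet kills and restarts the container, which is what increments restartCount. A hedged sketch of such a pod follows; the image, port and thresholds are assumptions, since the log does not show them.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // assumed image
				LivenessProbe: &corev1.Probe{
					// Note: k8s.io/api releases newer than the one this log was
					// produced with rename this embedded field to ProbeHandler.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}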
Aug 12 12:01:57.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:01:57.774: INFO: namespace: e2e-tests-container-probe-6l8zb, resource: bindings, ignored listing per whitelist Aug 12 12:01:57.823: INFO: namespace e2e-tests-container-probe-6l8zb deletion completed in 6.095581856s • [SLOW TEST:30.308 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:01:57.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:01:57.959: INFO: Creating deployment "test-recreate-deployment" Aug 12 12:01:57.962: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Aug 12 12:01:57.974: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Aug 12 12:01:59.980: INFO: Waiting for deployment "test-recreate-deployment" to complete Aug 12 12:01:59.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732830518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732830518, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732830518, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732830517, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 12 12:02:01.986: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 12 12:02:01.991: INFO: Updating deployment test-recreate-deployment Aug 12 12:02:01.991: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 12 12:02:02.577: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-r9qr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r9qr9/deployments/test-recreate-deployment,UID:9f53ff03-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5901125,Generation:2,CreationTimestamp:2020-08-12 12:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-12 12:02:02 +0000 UTC 2020-08-12 12:02:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-12 12:02:02 +0000 UTC 2020-08-12 12:01:57 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 12 12:02:02.615: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-r9qr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r9qr9/replicasets/test-recreate-deployment-589c4bfd,UID:a1cc959f-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5901124,Generation:1,CreationTimestamp:2020-08-12 12:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9f53ff03-dc93-11ea-b2c9-0242ac120008 0xc000fe405f 0xc000fe4070}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 12 12:02:02.615: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 12 12:02:02.616: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-r9qr9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r9qr9/replicasets/test-recreate-deployment-5bf7f65dc,UID:9f561230-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5901113,Generation:2,CreationTimestamp:2020-08-12 12:01:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9f53ff03-dc93-11ea-b2c9-0242ac120008 0xc000fe4140 0xc000fe4141}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 12 12:02:02.630: INFO: Pod "test-recreate-deployment-589c4bfd-fkdsc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-fkdsc,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-r9qr9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9qr9/pods/test-recreate-deployment-589c4bfd-fkdsc,UID:a1cfe762-dc93-11ea-b2c9-0242ac120008,ResourceVersion:5901127,Generation:0,CreationTimestamp:2020-08-12 12:02:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a1cc959f-dc93-11ea-b2c9-0242ac120008 0xc001712d6f 0xc001712d80}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sxhhf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxhhf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sxhhf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001712df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001712e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:02:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:02:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:02:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:02:02 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:02:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:02:02.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-r9qr9" for this suite. 
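The behaviour verified above comes from the Recreate deployment strategy, under which the old ReplicaSet is scaled to zero before the new one is scaled up, so old and new pods never run side by side. A minimal sketch of such a Deployment with the k8s.io/api types; the labels and image only loosely mirror the dumps in the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate (rather than RollingUpdate) tears the old ReplicaSet down
			// completely before bringing the new one up.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Println(d.Name, d.Spec.Strategy.Type)
}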
Aug 12 12:02:10.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:02:10.936: INFO: namespace: e2e-tests-deployment-r9qr9, resource: bindings, ignored listing per whitelist Aug 12 12:02:10.951: INFO: namespace e2e-tests-deployment-r9qr9 deletion completed in 8.317719447s • [SLOW TEST:13.128 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:02:10.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-a7276941-dc93-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:02:11.137: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-vgdfp" to be "success or failure" Aug 12 12:02:11.191: INFO: Pod "pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 54.274398ms Aug 12 12:02:13.195: INFO: Pod "pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057822075s Aug 12 12:02:15.198: INFO: Pod "pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.060915626s Aug 12 12:02:17.202: INFO: Pod "pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065173021s STEP: Saw pod success Aug 12 12:02:17.202: INFO: Pod "pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:02:17.205: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 12 12:02:17.237: INFO: Waiting for pod pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c to disappear Aug 12 12:02:17.334: INFO: Pod pod-projected-secrets-a72c6151-dc93-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:02:17.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vgdfp" for this suite. 
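The defaultMode case above exercises a projected volume whose secret source is written with a non-default file mode. A sketch follows under assumed names and an assumed 0400 mode, since the log does not show the actual mode or keys used.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed defaultMode: files readable only by the owner
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}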
Aug 12 12:02:23.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:02:23.400: INFO: namespace: e2e-tests-projected-vgdfp, resource: bindings, ignored listing per whitelist Aug 12 12:02:23.477: INFO: namespace e2e-tests-projected-vgdfp deletion completed in 6.139709319s • [SLOW TEST:12.526 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:02:23.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 12:02:23.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-g6j6q" to be "success or failure" Aug 12 12:02:23.600: INFO: Pod "downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.713273ms Aug 12 12:02:25.604: INFO: Pod "downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007297839s Aug 12 12:02:27.623: INFO: Pod "downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026114045s STEP: Saw pod success Aug 12 12:02:27.623: INFO: Pod "downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:02:27.626: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 12:02:27.713: INFO: Waiting for pod downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c to disappear Aug 12 12:02:27.790: INFO: Pod downwardapi-volume-ae97da4c-dc93-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:02:27.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g6j6q" for this suite. 
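The test above mounts a downward API volume that renders the container's own memory request into a file inside the pod. A minimal sketch follows, with an assumed 32Mi request and file path; the actual values are not visible in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				// The request being exposed must be set on the container itself.
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file ends up containing the request in bytes.
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}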
Aug 12 12:02:33.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:02:33.874: INFO: namespace: e2e-tests-downward-api-g6j6q, resource: bindings, ignored listing per whitelist Aug 12 12:02:33.894: INFO: namespace e2e-tests-downward-api-g6j6q deletion completed in 6.100075361s • [SLOW TEST:10.417 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:02:33.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-b4d6d060-dc93-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:02:34.123: INFO: Waiting up to 5m0s for pod "pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-h5sx6" to be "success or failure" Aug 12 12:02:34.228: INFO: Pod "pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 104.293057ms Aug 12 12:02:36.232: INFO: Pod "pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108667592s Aug 12 12:02:38.236: INFO: Pod "pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112390626s STEP: Saw pod success Aug 12 12:02:38.236: INFO: Pod "pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:02:38.239: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c container secret-volume-test: STEP: delete the pod Aug 12 12:02:38.262: INFO: Waiting for pod pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c to disappear Aug 12 12:02:38.303: INFO: Pod pod-secrets-b4e0d05c-dc93-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:02:38.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-h5sx6" for this suite. 
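The point of the test above is that a SecretVolumeSource names the secret only, never a namespace, so it always resolves against the namespace the pod itself lives in, even when an identically named secret exists elsewhere. A sketch with assumed names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two Secret objects may both be called "secret-test" in different
	// namespaces; SecretName below is resolved in the pod's own namespace.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: "e2e-tests-secrets-h5sx6"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Namespace + "/" + pod.Name)
}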
Aug 12 12:02:44.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:02:44.365: INFO: namespace: e2e-tests-secrets-h5sx6, resource: bindings, ignored listing per whitelist Aug 12 12:02:44.402: INFO: namespace e2e-tests-secrets-h5sx6 deletion completed in 6.094503285s STEP: Destroying namespace "e2e-tests-secret-namespace-zx89t" for this suite. Aug 12 12:02:50.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:02:50.520: INFO: namespace: e2e-tests-secret-namespace-zx89t, resource: bindings, ignored listing per whitelist Aug 12 12:02:50.585: INFO: namespace e2e-tests-secret-namespace-zx89t deletion completed in 6.183321725s • [SLOW TEST:16.691 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:02:50.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-bec5820a-dc93-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:02:50.730: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-fspmw" to be "success or failure" Aug 12 12:02:50.734: INFO: Pod "pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329314ms Aug 12 12:02:52.738: INFO: Pod "pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008186617s Aug 12 12:02:54.741: INFO: Pod "pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.011652785s Aug 12 12:02:56.744: INFO: Pod "pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014458226s STEP: Saw pod success Aug 12 12:02:56.744: INFO: Pod "pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:02:56.746: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 12 12:02:56.785: INFO: Waiting for pod pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c to disappear Aug 12 12:02:56.792: INFO: Pod pod-projected-secrets-bec5f373-dc93-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:02:56.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fspmw" for this suite. Aug 12 12:03:02.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:03:02.881: INFO: namespace: e2e-tests-projected-fspmw, resource: bindings, ignored listing per whitelist Aug 12 12:03:02.902: INFO: namespace e2e-tests-projected-fspmw deletion completed in 6.107529649s • [SLOW TEST:12.317 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:03:02.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-c61aa832-dc93-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 12:03:03.090: INFO: Waiting up to 5m0s for pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-hdpzd" to be "success or failure" Aug 12 12:03:03.098: INFO: Pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19232ms Aug 12 12:03:05.102: INFO: Pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012376932s Aug 12 12:03:07.106: INFO: Pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016470719s Aug 12 12:03:09.629: INFO: Pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 6.53949981s Aug 12 12:03:11.633: INFO: Pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.543126541s STEP: Saw pod success Aug 12 12:03:11.633: INFO: Pod "pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:03:11.636: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 12:03:11.677: INFO: Waiting for pod pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c to disappear Aug 12 12:03:11.703: INFO: Pod pod-configmaps-c62431a2-dc93-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:03:11.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hdpzd" for this suite. Aug 12 12:03:17.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:03:17.822: INFO: namespace: e2e-tests-configmap-hdpzd, resource: bindings, ignored listing per whitelist Aug 12 12:03:17.831: INFO: namespace e2e-tests-configmap-hdpzd deletion completed in 6.09237906s • [SLOW TEST:14.928 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:03:17.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Aug 12 12:03:17.946: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:03:26.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-t668z" for this suite. 
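The RestartAlways case above verifies that init containers run sequentially to completion before any regular container is started. A minimal sketch with assumed busybox images and commands (the test's own images are not shown in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run one at a time, in order, and each must exit
			// successfully before the next one (and then the main containers) start.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Println(pod.Name)
}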
Aug 12 12:03:48.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:03:48.108: INFO: namespace: e2e-tests-init-container-t668z, resource: bindings, ignored listing per whitelist Aug 12 12:03:48.116: INFO: namespace e2e-tests-init-container-t668z deletion completed in 22.070642166s • [SLOW TEST:30.285 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:03:48.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 12 12:03:49.793: INFO: Pod name wrapped-volume-race-e1f8b95a-dc93-11ea-9b9c-0242ac11000c: Found 0 pods out of 5 Aug 12 12:03:54.800: INFO: Pod name wrapped-volume-race-e1f8b95a-dc93-11ea-9b9c-0242ac11000c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e1f8b95a-dc93-11ea-9b9c-0242ac11000c in namespace e2e-tests-emptydir-wrapper-lplxs, will wait for the garbage collector to delete the pods Aug 12 12:06:06.903: INFO: Deleting ReplicationController wrapped-volume-race-e1f8b95a-dc93-11ea-9b9c-0242ac11000c took: 8.22041ms Aug 12 12:06:07.003: INFO: Terminating ReplicationController wrapped-volume-race-e1f8b95a-dc93-11ea-9b9c-0242ac11000c pods took: 100.19213ms STEP: Creating RC which spawns configmap-volume pods Aug 12 12:06:48.658: INFO: Pod name wrapped-volume-race-4c90943d-dc94-11ea-9b9c-0242ac11000c: Found 0 pods out of 5 Aug 12 12:06:53.671: INFO: Pod name wrapped-volume-race-4c90943d-dc94-11ea-9b9c-0242ac11000c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4c90943d-dc94-11ea-9b9c-0242ac11000c in namespace e2e-tests-emptydir-wrapper-lplxs, will wait for the garbage collector to delete the pods Aug 12 12:09:07.751: INFO: Deleting ReplicationController wrapped-volume-race-4c90943d-dc94-11ea-9b9c-0242ac11000c took: 6.722489ms Aug 12 12:09:07.852: INFO: Terminating ReplicationController wrapped-volume-race-4c90943d-dc94-11ea-9b9c-0242ac11000c pods took: 100.167692ms STEP: Creating RC which spawns configmap-volume pods Aug 12 12:09:48.684: INFO: Pod name wrapped-volume-race-b7e1d667-dc94-11ea-9b9c-0242ac11000c: Found 0 pods out of 5 Aug 12 12:09:53.693: INFO: Pod name wrapped-volume-race-b7e1d667-dc94-11ea-9b9c-0242ac11000c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-b7e1d667-dc94-11ea-9b9c-0242ac11000c in namespace e2e-tests-emptydir-wrapper-lplxs, will wait for the garbage collector to delete the pods Aug 12 12:12:27.774: INFO: Deleting ReplicationController wrapped-volume-race-b7e1d667-dc94-11ea-9b9c-0242ac11000c took: 5.961987ms Aug 12 12:12:27.974: INFO: Terminating ReplicationController wrapped-volume-race-b7e1d667-dc94-11ea-9b9c-0242ac11000c pods took: 200.222181ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:13:18.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-lplxs" for this suite. Aug 12 12:13:26.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:13:26.372: INFO: namespace: e2e-tests-emptydir-wrapper-lplxs, resource: bindings, ignored listing per whitelist Aug 12 12:13:26.380: INFO: namespace e2e-tests-emptydir-wrapper-lplxs deletion completed in 8.113712597s • [SLOW TEST:578.264 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:13:26.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 12:13:26.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-5t6s2" to be "success or failure" Aug 12 12:13:26.511: INFO: Pod "downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.484546ms Aug 12 12:13:28.608: INFO: Pod "downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14341741s Aug 12 12:13:30.611: INFO: Pod "downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.146850667s STEP: Saw pod success Aug 12 12:13:30.611: INFO: Pod "downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:13:30.614: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 12:13:30.641: INFO: Waiting for pod downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c to disappear Aug 12 12:13:30.721: INFO: Pod downwardapi-volume-39b35429-dc95-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:13:30.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5t6s2" for this suite. Aug 12 12:13:36.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:13:36.843: INFO: namespace: e2e-tests-downward-api-5t6s2, resource: bindings, ignored listing per whitelist Aug 12 12:13:36.921: INFO: namespace e2e-tests-downward-api-5t6s2 deletion completed in 6.196754228s • [SLOW TEST:10.541 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:13:36.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:13:37.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8vhd5" for this suite. 
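The QOS-class check above depends on how requests and limits are set: when every container's requests equal its limits, the pod's status.qosClass is reported as Guaranteed. A sketch with assumed resource values (the test's actual figures are not in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	resources := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("100Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("100Mi"),
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "main",
				Image:     "busybox",
				Command:   []string{"sleep", "3600"},
				Resources: resources, // requests == limits for every container => Guaranteed
			}},
		},
	}
	// With matching requests and limits, the submitted pod's status.qosClass
	// comes back as corev1.PodQOSGuaranteed.
	fmt.Println(pod.Name, corev1.PodQOSGuaranteed)
}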
Aug 12 12:13:59.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:13:59.124: INFO: namespace: e2e-tests-pods-8vhd5, resource: bindings, ignored listing per whitelist Aug 12 12:13:59.197: INFO: namespace e2e-tests-pods-8vhd5 deletion completed in 22.110073407s • [SLOW TEST:22.276 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:13:59.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-fv74 STEP: Creating a pod to test atomic-volume-subpath Aug 12 12:13:59.364: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fv74" in namespace "e2e-tests-subpath-lqrzg" to be "success or failure" Aug 12 12:13:59.396: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Pending", Reason="", readiness=false. Elapsed: 32.106844ms Aug 12 12:14:01.441: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076966916s Aug 12 12:14:03.445: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081230722s Aug 12 12:14:05.449: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085164653s Aug 12 12:14:07.453: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088621798s Aug 12 12:14:09.724: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 10.359911829s Aug 12 12:14:11.729: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 12.364562097s Aug 12 12:14:13.733: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 14.369130492s Aug 12 12:14:15.737: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 16.373078336s Aug 12 12:14:17.740: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 18.376472404s Aug 12 12:14:19.745: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 20.380592683s Aug 12 12:14:21.748: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.384504259s Aug 12 12:14:23.753: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 24.389256223s Aug 12 12:14:25.758: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Running", Reason="", readiness=false. Elapsed: 26.394139562s Aug 12 12:14:27.762: INFO: Pod "pod-subpath-test-secret-fv74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.397776559s STEP: Saw pod success Aug 12 12:14:27.762: INFO: Pod "pod-subpath-test-secret-fv74" satisfied condition "success or failure" Aug 12 12:14:27.764: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-fv74 container test-container-subpath-secret-fv74: STEP: delete the pod Aug 12 12:14:27.784: INFO: Waiting for pod pod-subpath-test-secret-fv74 to disappear Aug 12 12:14:27.806: INFO: Pod pod-subpath-test-secret-fv74 no longer exists STEP: Deleting pod pod-subpath-test-secret-fv74 Aug 12 12:14:27.806: INFO: Deleting pod "pod-subpath-test-secret-fv74" in namespace "e2e-tests-subpath-lqrzg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:14:27.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lqrzg" for this suite. Aug 12 12:14:33.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:14:33.962: INFO: namespace: e2e-tests-subpath-lqrzg, resource: bindings, ignored listing per whitelist Aug 12 12:14:33.990: INFO: namespace e2e-tests-subpath-lqrzg deletion completed in 6.178847177s • [SLOW TEST:34.792 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:14:33.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-62024194-dc95-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 12:14:34.100: INFO: Waiting up to 5m0s for pod "pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-cqwfv" to be "success or failure" Aug 12 12:14:34.111: INFO: Pod "pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.279012ms Aug 12 12:14:36.115: INFO: Pod "pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015353772s Aug 12 12:14:38.119: INFO: Pod "pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019186994s Aug 12 12:14:40.123: INFO: Pod "pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023494293s STEP: Saw pod success Aug 12 12:14:40.124: INFO: Pod "pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:14:40.126: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 12:14:40.276: INFO: Waiting for pod pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c to disappear Aug 12 12:14:40.352: INFO: Pod pod-configmaps-6202e83d-dc95-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:14:40.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cqwfv" for this suite. Aug 12 12:14:46.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:14:46.591: INFO: namespace: e2e-tests-configmap-cqwfv, resource: bindings, ignored listing per whitelist Aug 12 12:14:46.591: INFO: namespace e2e-tests-configmap-cqwfv deletion completed in 6.235174174s • [SLOW TEST:12.601 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:14:46.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-698788a3-dc95-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 12:14:46.719: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-9pml7" to be "success or failure" Aug 12 12:14:46.758: INFO: Pod "pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.91418ms Aug 12 12:14:48.810: INFO: Pod "pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090430845s Aug 12 12:14:50.815: INFO: Pod "pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095751181s STEP: Saw pod success Aug 12 12:14:50.815: INFO: Pod "pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:14:50.818: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: STEP: delete the pod Aug 12 12:14:51.019: INFO: Waiting for pod pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c to disappear Aug 12 12:14:51.059: INFO: Pod pod-projected-configmaps-698919f1-dc95-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:14:51.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9pml7" for this suite. Aug 12 12:14:57.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:14:57.106: INFO: namespace: e2e-tests-projected-9pml7, resource: bindings, ignored listing per whitelist Aug 12 12:14:57.152: INFO: namespace e2e-tests-projected-9pml7 deletion completed in 6.089856358s • [SLOW TEST:10.560 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:14:57.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-9g9dj I0812 12:14:57.279951 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-9g9dj, replica count: 1 I0812 12:14:58.330376 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0812 12:14:59.330592 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0812 12:15:00.330797 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 12 12:15:00.460: INFO: Created: latency-svc-rjw6j Aug 12 12:15:00.476: INFO: Got endpoints: latency-svc-rjw6j [45.520049ms] Aug 12 12:15:00.502: INFO: Created: latency-svc-nxs88 Aug 12 12:15:00.537: INFO: Got endpoints: latency-svc-nxs88 [60.446993ms] Aug 12 12:15:00.557: INFO: Created: latency-svc-6nrwm Aug 12 12:15:00.586: INFO: Got endpoints: latency-svc-6nrwm [109.700975ms] Aug 12 12:15:00.610: INFO: Created: 
latency-svc-85w64 Aug 12 12:15:00.622: INFO: Got endpoints: latency-svc-85w64 [145.763428ms] Aug 12 12:15:00.669: INFO: Created: latency-svc-5xdrj Aug 12 12:15:00.706: INFO: Got endpoints: latency-svc-5xdrj [229.716351ms] Aug 12 12:15:00.707: INFO: Created: latency-svc-s6px9 Aug 12 12:15:00.718: INFO: Got endpoints: latency-svc-s6px9 [242.037613ms] Aug 12 12:15:00.754: INFO: Created: latency-svc-ml8jx Aug 12 12:15:00.800: INFO: Got endpoints: latency-svc-ml8jx [323.964351ms] Aug 12 12:15:00.814: INFO: Created: latency-svc-llp5x Aug 12 12:15:00.863: INFO: Got endpoints: latency-svc-llp5x [386.344087ms] Aug 12 12:15:00.947: INFO: Created: latency-svc-d29t6 Aug 12 12:15:00.960: INFO: Got endpoints: latency-svc-d29t6 [483.617575ms] Aug 12 12:15:00.982: INFO: Created: latency-svc-v2hfz Aug 12 12:15:01.136: INFO: Got endpoints: latency-svc-v2hfz [659.081726ms] Aug 12 12:15:01.144: INFO: Created: latency-svc-qpsrb Aug 12 12:15:01.188: INFO: Got endpoints: latency-svc-qpsrb [711.473249ms] Aug 12 12:15:01.321: INFO: Created: latency-svc-zd8nv Aug 12 12:15:01.325: INFO: Got endpoints: latency-svc-zd8nv [848.688958ms] Aug 12 12:15:01.467: INFO: Created: latency-svc-rhxk6 Aug 12 12:15:01.476: INFO: Got endpoints: latency-svc-rhxk6 [999.211038ms] Aug 12 12:15:01.498: INFO: Created: latency-svc-wgjql Aug 12 12:15:01.501: INFO: Got endpoints: latency-svc-wgjql [1.024370954s] Aug 12 12:15:01.528: INFO: Created: latency-svc-6b9pj Aug 12 12:15:01.548: INFO: Got endpoints: latency-svc-6b9pj [1.071792227s] Aug 12 12:15:01.606: INFO: Created: latency-svc-nxgg4 Aug 12 12:15:01.614: INFO: Got endpoints: latency-svc-nxgg4 [1.137590236s] Aug 12 12:15:01.654: INFO: Created: latency-svc-4bmp6 Aug 12 12:15:01.662: INFO: Got endpoints: latency-svc-4bmp6 [1.125490273s] Aug 12 12:15:01.690: INFO: Created: latency-svc-75dck Aug 12 12:15:01.716: INFO: Got endpoints: latency-svc-75dck [1.130280464s] Aug 12 12:15:01.756: INFO: Created: latency-svc-m6gtj Aug 12 12:15:01.765: INFO: Got endpoints: latency-svc-m6gtj [1.143467038s] Aug 12 12:15:01.799: INFO: Created: latency-svc-sjfhf Aug 12 12:15:01.807: INFO: Got endpoints: latency-svc-sjfhf [1.101059428s] Aug 12 12:15:01.854: INFO: Created: latency-svc-dfs24 Aug 12 12:15:01.857: INFO: Got endpoints: latency-svc-dfs24 [1.138914924s] Aug 12 12:15:01.912: INFO: Created: latency-svc-jq4rr Aug 12 12:15:01.928: INFO: Got endpoints: latency-svc-jq4rr [1.127676181s] Aug 12 12:15:01.948: INFO: Created: latency-svc-8jf9k Aug 12 12:15:01.986: INFO: Got endpoints: latency-svc-8jf9k [1.122756743s] Aug 12 12:15:02.003: INFO: Created: latency-svc-kzxs2 Aug 12 12:15:02.018: INFO: Got endpoints: latency-svc-kzxs2 [1.058065598s] Aug 12 12:15:02.038: INFO: Created: latency-svc-78gv8 Aug 12 12:15:02.074: INFO: Got endpoints: latency-svc-78gv8 [937.884329ms] Aug 12 12:15:02.148: INFO: Created: latency-svc-qzxbc Aug 12 12:15:02.150: INFO: Got endpoints: latency-svc-qzxbc [962.411335ms] Aug 12 12:15:02.189: INFO: Created: latency-svc-fnvtj Aug 12 12:15:02.199: INFO: Got endpoints: latency-svc-fnvtj [873.394177ms] Aug 12 12:15:02.235: INFO: Created: latency-svc-swst8 Aug 12 12:15:02.303: INFO: Got endpoints: latency-svc-swst8 [827.230764ms] Aug 12 12:15:02.309: INFO: Created: latency-svc-8gknk Aug 12 12:15:02.331: INFO: Got endpoints: latency-svc-8gknk [830.111322ms] Aug 12 12:15:02.374: INFO: Created: latency-svc-zhn96 Aug 12 12:15:02.386: INFO: Got endpoints: latency-svc-zhn96 [837.30369ms] Aug 12 12:15:02.465: INFO: Created: latency-svc-96cn5 Aug 12 12:15:02.468: INFO: Got endpoints: 
latency-svc-96cn5 [854.13645ms] Aug 12 12:15:02.506: INFO: Created: latency-svc-9kz2k Aug 12 12:15:02.518: INFO: Got endpoints: latency-svc-9kz2k [856.299851ms] Aug 12 12:15:02.554: INFO: Created: latency-svc-7z6kk Aug 12 12:15:02.621: INFO: Got endpoints: latency-svc-7z6kk [904.491763ms] Aug 12 12:15:02.644: INFO: Created: latency-svc-cxdr8 Aug 12 12:15:02.668: INFO: Got endpoints: latency-svc-cxdr8 [902.927523ms] Aug 12 12:15:02.686: INFO: Created: latency-svc-lwg9x Aug 12 12:15:02.698: INFO: Got endpoints: latency-svc-lwg9x [890.486821ms] Aug 12 12:15:02.716: INFO: Created: latency-svc-98h4w Aug 12 12:15:02.770: INFO: Got endpoints: latency-svc-98h4w [912.614365ms] Aug 12 12:15:02.788: INFO: Created: latency-svc-t7qj6 Aug 12 12:15:02.806: INFO: Got endpoints: latency-svc-t7qj6 [878.197476ms] Aug 12 12:15:02.848: INFO: Created: latency-svc-2chm5 Aug 12 12:15:02.951: INFO: Got endpoints: latency-svc-2chm5 [965.201991ms] Aug 12 12:15:02.955: INFO: Created: latency-svc-fz27w Aug 12 12:15:02.964: INFO: Got endpoints: latency-svc-fz27w [945.574616ms] Aug 12 12:15:03.004: INFO: Created: latency-svc-9c5zn Aug 12 12:15:03.012: INFO: Got endpoints: latency-svc-9c5zn [938.10481ms] Aug 12 12:15:03.034: INFO: Created: latency-svc-vj98q Aug 12 12:15:03.048: INFO: Got endpoints: latency-svc-vj98q [897.604806ms] Aug 12 12:15:03.125: INFO: Created: latency-svc-qqqm6 Aug 12 12:15:03.132: INFO: Got endpoints: latency-svc-qqqm6 [933.724295ms] Aug 12 12:15:03.171: INFO: Created: latency-svc-7ndq7 Aug 12 12:15:03.193: INFO: Got endpoints: latency-svc-7ndq7 [889.626295ms] Aug 12 12:15:03.287: INFO: Created: latency-svc-wvz55 Aug 12 12:15:03.290: INFO: Got endpoints: latency-svc-wvz55 [959.221346ms] Aug 12 12:15:03.328: INFO: Created: latency-svc-zmdrw Aug 12 12:15:03.345: INFO: Got endpoints: latency-svc-zmdrw [958.800617ms] Aug 12 12:15:03.372: INFO: Created: latency-svc-p5bhd Aug 12 12:15:03.459: INFO: Got endpoints: latency-svc-p5bhd [990.574285ms] Aug 12 12:15:03.462: INFO: Created: latency-svc-qrs4c Aug 12 12:15:03.475: INFO: Got endpoints: latency-svc-qrs4c [956.541957ms] Aug 12 12:15:03.520: INFO: Created: latency-svc-fqh4q Aug 12 12:15:03.536: INFO: Got endpoints: latency-svc-fqh4q [914.961016ms] Aug 12 12:15:03.556: INFO: Created: latency-svc-gcvf7 Aug 12 12:15:03.609: INFO: Got endpoints: latency-svc-gcvf7 [940.114488ms] Aug 12 12:15:03.621: INFO: Created: latency-svc-792d7 Aug 12 12:15:03.638: INFO: Got endpoints: latency-svc-792d7 [940.033599ms] Aug 12 12:15:03.664: INFO: Created: latency-svc-794qz Aug 12 12:15:03.680: INFO: Got endpoints: latency-svc-794qz [910.12055ms] Aug 12 12:15:03.789: INFO: Created: latency-svc-66lmw Aug 12 12:15:03.795: INFO: Got endpoints: latency-svc-66lmw [988.447888ms] Aug 12 12:15:03.839: INFO: Created: latency-svc-hbw4w Aug 12 12:15:03.855: INFO: Got endpoints: latency-svc-hbw4w [904.288889ms] Aug 12 12:15:03.874: INFO: Created: latency-svc-xt25d Aug 12 12:15:03.884: INFO: Got endpoints: latency-svc-xt25d [920.764183ms] Aug 12 12:15:03.932: INFO: Created: latency-svc-8mgz9 Aug 12 12:15:03.939: INFO: Got endpoints: latency-svc-8mgz9 [926.999874ms] Aug 12 12:15:03.958: INFO: Created: latency-svc-j9ppn Aug 12 12:15:03.968: INFO: Got endpoints: latency-svc-j9ppn [920.372301ms] Aug 12 12:15:03.987: INFO: Created: latency-svc-4gcqb Aug 12 12:15:04.018: INFO: Got endpoints: latency-svc-4gcqb [885.254099ms] Aug 12 12:15:04.083: INFO: Created: latency-svc-xskfh Aug 12 12:15:04.085: INFO: Got endpoints: latency-svc-xskfh [892.441076ms] Aug 12 12:15:04.126: INFO: Created: 
latency-svc-qbfk2 Aug 12 12:15:04.143: INFO: Got endpoints: latency-svc-qbfk2 [852.997554ms] Aug 12 12:15:04.168: INFO: Created: latency-svc-j9psk Aug 12 12:15:04.179: INFO: Got endpoints: latency-svc-j9psk [834.798144ms] Aug 12 12:15:04.232: INFO: Created: latency-svc-44dsd Aug 12 12:15:04.251: INFO: Got endpoints: latency-svc-44dsd [792.217846ms] Aug 12 12:15:04.276: INFO: Created: latency-svc-wg8s4 Aug 12 12:15:04.288: INFO: Got endpoints: latency-svc-wg8s4 [812.62201ms] Aug 12 12:15:04.306: INFO: Created: latency-svc-wpszm Aug 12 12:15:04.329: INFO: Got endpoints: latency-svc-wpszm [793.158856ms] Aug 12 12:15:04.394: INFO: Created: latency-svc-59z5z Aug 12 12:15:04.403: INFO: Got endpoints: latency-svc-59z5z [794.036641ms] Aug 12 12:15:04.438: INFO: Created: latency-svc-7zf44 Aug 12 12:15:04.450: INFO: Got endpoints: latency-svc-7zf44 [812.441724ms] Aug 12 12:15:04.473: INFO: Created: latency-svc-vpsrk Aug 12 12:15:04.493: INFO: Got endpoints: latency-svc-vpsrk [812.50995ms] Aug 12 12:15:04.549: INFO: Created: latency-svc-c4hqw Aug 12 12:15:04.559: INFO: Got endpoints: latency-svc-c4hqw [764.039613ms] Aug 12 12:15:04.587: INFO: Created: latency-svc-2s4hf Aug 12 12:15:04.602: INFO: Got endpoints: latency-svc-2s4hf [746.524944ms] Aug 12 12:15:04.630: INFO: Created: latency-svc-twtdz Aug 12 12:15:04.693: INFO: Got endpoints: latency-svc-twtdz [808.166504ms] Aug 12 12:15:04.720: INFO: Created: latency-svc-tkdln Aug 12 12:15:04.728: INFO: Got endpoints: latency-svc-tkdln [788.756179ms] Aug 12 12:15:04.750: INFO: Created: latency-svc-29njw Aug 12 12:15:04.777: INFO: Got endpoints: latency-svc-29njw [808.217375ms] Aug 12 12:15:04.897: INFO: Created: latency-svc-jttct Aug 12 12:15:04.900: INFO: Got endpoints: latency-svc-jttct [882.187475ms] Aug 12 12:15:04.942: INFO: Created: latency-svc-bw9qp Aug 12 12:15:04.945: INFO: Got endpoints: latency-svc-bw9qp [860.220354ms] Aug 12 12:15:05.046: INFO: Created: latency-svc-kmg8d Aug 12 12:15:05.068: INFO: Got endpoints: latency-svc-kmg8d [924.104963ms] Aug 12 12:15:05.134: INFO: Created: latency-svc-zddqx Aug 12 12:15:05.178: INFO: Got endpoints: latency-svc-zddqx [998.085602ms] Aug 12 12:15:05.205: INFO: Created: latency-svc-t5sgn Aug 12 12:15:05.215: INFO: Got endpoints: latency-svc-t5sgn [963.513061ms] Aug 12 12:15:05.235: INFO: Created: latency-svc-4ppbl Aug 12 12:15:05.246: INFO: Got endpoints: latency-svc-4ppbl [957.714612ms] Aug 12 12:15:05.341: INFO: Created: latency-svc-qwz9l Aug 12 12:15:05.345: INFO: Got endpoints: latency-svc-qwz9l [1.016130974s] Aug 12 12:15:05.379: INFO: Created: latency-svc-jdbss Aug 12 12:15:05.396: INFO: Got endpoints: latency-svc-jdbss [992.782014ms] Aug 12 12:15:05.424: INFO: Created: latency-svc-ksprk Aug 12 12:15:05.483: INFO: Got endpoints: latency-svc-ksprk [1.032683013s] Aug 12 12:15:05.512: INFO: Created: latency-svc-8tv48 Aug 12 12:15:05.516: INFO: Got endpoints: latency-svc-8tv48 [1.022638356s] Aug 12 12:15:05.553: INFO: Created: latency-svc-4c5c6 Aug 12 12:15:05.564: INFO: Got endpoints: latency-svc-4c5c6 [1.005032953s] Aug 12 12:15:05.583: INFO: Created: latency-svc-l552v Aug 12 12:15:05.656: INFO: Got endpoints: latency-svc-l552v [1.054539673s] Aug 12 12:15:05.660: INFO: Created: latency-svc-m7h27 Aug 12 12:15:05.666: INFO: Got endpoints: latency-svc-m7h27 [973.661728ms] Aug 12 12:15:05.704: INFO: Created: latency-svc-jm7hg Aug 12 12:15:05.715: INFO: Got endpoints: latency-svc-jm7hg [987.100584ms] Aug 12 12:15:05.807: INFO: Created: latency-svc-f27wc Aug 12 12:15:05.810: INFO: Got endpoints: 
latency-svc-f27wc [143.094537ms] Aug 12 12:15:05.847: INFO: Created: latency-svc-n8trg Aug 12 12:15:05.872: INFO: Got endpoints: latency-svc-n8trg [1.095181012s] Aug 12 12:15:05.904: INFO: Created: latency-svc-zz9vh Aug 12 12:15:05.944: INFO: Got endpoints: latency-svc-zz9vh [1.043947822s] Aug 12 12:15:05.961: INFO: Created: latency-svc-7pmxb Aug 12 12:15:05.974: INFO: Got endpoints: latency-svc-7pmxb [1.028608483s] Aug 12 12:15:05.998: INFO: Created: latency-svc-l2nd6 Aug 12 12:15:06.010: INFO: Got endpoints: latency-svc-l2nd6 [942.675732ms] Aug 12 12:15:06.039: INFO: Created: latency-svc-4r7w2 Aug 12 12:15:06.082: INFO: Got endpoints: latency-svc-4r7w2 [904.182421ms] Aug 12 12:15:06.094: INFO: Created: latency-svc-s5vqz Aug 12 12:15:06.122: INFO: Got endpoints: latency-svc-s5vqz [907.371258ms] Aug 12 12:15:06.164: INFO: Created: latency-svc-4mc8m Aug 12 12:15:06.171: INFO: Got endpoints: latency-svc-4mc8m [925.011344ms] Aug 12 12:15:06.226: INFO: Created: latency-svc-vgbd5 Aug 12 12:15:06.229: INFO: Got endpoints: latency-svc-vgbd5 [883.452837ms] Aug 12 12:15:06.273: INFO: Created: latency-svc-hgdw5 Aug 12 12:15:06.300: INFO: Got endpoints: latency-svc-hgdw5 [904.061983ms] Aug 12 12:15:06.388: INFO: Created: latency-svc-g5cdq Aug 12 12:15:06.391: INFO: Got endpoints: latency-svc-g5cdq [907.365081ms] Aug 12 12:15:06.453: INFO: Created: latency-svc-nhrd6 Aug 12 12:15:06.483: INFO: Got endpoints: latency-svc-nhrd6 [967.002935ms] Aug 12 12:15:06.549: INFO: Created: latency-svc-9gxxr Aug 12 12:15:06.559: INFO: Got endpoints: latency-svc-9gxxr [995.025865ms] Aug 12 12:15:06.591: INFO: Created: latency-svc-jb7lc Aug 12 12:15:06.602: INFO: Got endpoints: latency-svc-jb7lc [945.19723ms] Aug 12 12:15:06.620: INFO: Created: latency-svc-4tfd5 Aug 12 12:15:06.632: INFO: Got endpoints: latency-svc-4tfd5 [916.646269ms] Aug 12 12:15:06.699: INFO: Created: latency-svc-pkqdj Aug 12 12:15:06.701: INFO: Got endpoints: latency-svc-pkqdj [891.282449ms] Aug 12 12:15:06.765: INFO: Created: latency-svc-qchs8 Aug 12 12:15:06.782: INFO: Got endpoints: latency-svc-qchs8 [909.977735ms] Aug 12 12:15:06.861: INFO: Created: latency-svc-cqhtm Aug 12 12:15:06.867: INFO: Got endpoints: latency-svc-cqhtm [922.807915ms] Aug 12 12:15:06.920: INFO: Created: latency-svc-4ph4k Aug 12 12:15:06.932: INFO: Got endpoints: latency-svc-4ph4k [958.314243ms] Aug 12 12:15:07.017: INFO: Created: latency-svc-4zpv7 Aug 12 12:15:07.020: INFO: Got endpoints: latency-svc-4zpv7 [1.009170285s] Aug 12 12:15:07.076: INFO: Created: latency-svc-4zhz7 Aug 12 12:15:07.113: INFO: Got endpoints: latency-svc-4zhz7 [1.031452529s] Aug 12 12:15:07.184: INFO: Created: latency-svc-kfk74 Aug 12 12:15:07.187: INFO: Got endpoints: latency-svc-kfk74 [1.06457654s] Aug 12 12:15:07.238: INFO: Created: latency-svc-nfsfb Aug 12 12:15:07.274: INFO: Got endpoints: latency-svc-nfsfb [1.103203207s] Aug 12 12:15:07.345: INFO: Created: latency-svc-tb2xk Aug 12 12:15:07.353: INFO: Got endpoints: latency-svc-tb2xk [1.124630595s] Aug 12 12:15:07.390: INFO: Created: latency-svc-v2vmv Aug 12 12:15:07.401: INFO: Got endpoints: latency-svc-v2vmv [1.101775282s] Aug 12 12:15:07.418: INFO: Created: latency-svc-wqgpb Aug 12 12:15:07.432: INFO: Got endpoints: latency-svc-wqgpb [1.041142141s] Aug 12 12:15:07.490: INFO: Created: latency-svc-d2trm Aug 12 12:15:07.492: INFO: Got endpoints: latency-svc-d2trm [1.009639086s] Aug 12 12:15:07.538: INFO: Created: latency-svc-v76hf Aug 12 12:15:07.565: INFO: Got endpoints: latency-svc-v76hf [1.005318542s] Aug 12 12:15:07.645: INFO: Created: 
latency-svc-4prtl Aug 12 12:15:07.649: INFO: Got endpoints: latency-svc-4prtl [1.047143314s] Aug 12 12:15:07.712: INFO: Created: latency-svc-r9lsr Aug 12 12:15:07.740: INFO: Got endpoints: latency-svc-r9lsr [1.107970696s] Aug 12 12:15:07.800: INFO: Created: latency-svc-b8b6d Aug 12 12:15:07.817: INFO: Got endpoints: latency-svc-b8b6d [1.116423539s] Aug 12 12:15:07.856: INFO: Created: latency-svc-9t8xx Aug 12 12:15:07.883: INFO: Got endpoints: latency-svc-9t8xx [1.101259446s] Aug 12 12:15:07.963: INFO: Created: latency-svc-6fn4d Aug 12 12:15:07.965: INFO: Got endpoints: latency-svc-6fn4d [1.098024877s] Aug 12 12:15:07.995: INFO: Created: latency-svc-pzggq Aug 12 12:15:08.018: INFO: Got endpoints: latency-svc-pzggq [1.0852479s] Aug 12 12:15:08.043: INFO: Created: latency-svc-hxmmp Aug 12 12:15:08.112: INFO: Got endpoints: latency-svc-hxmmp [1.09277202s] Aug 12 12:15:08.120: INFO: Created: latency-svc-5w9lh Aug 12 12:15:08.142: INFO: Got endpoints: latency-svc-5w9lh [1.028667445s] Aug 12 12:15:08.168: INFO: Created: latency-svc-vqkw4 Aug 12 12:15:08.185: INFO: Got endpoints: latency-svc-vqkw4 [997.597019ms] Aug 12 12:15:08.204: INFO: Created: latency-svc-8nmzn Aug 12 12:15:08.267: INFO: Got endpoints: latency-svc-8nmzn [993.355013ms] Aug 12 12:15:08.271: INFO: Created: latency-svc-skjlz Aug 12 12:15:08.287: INFO: Got endpoints: latency-svc-skjlz [933.678746ms] Aug 12 12:15:08.306: INFO: Created: latency-svc-ktcs6 Aug 12 12:15:08.330: INFO: Got endpoints: latency-svc-ktcs6 [928.201462ms] Aug 12 12:15:08.360: INFO: Created: latency-svc-g2cv2 Aug 12 12:15:08.435: INFO: Got endpoints: latency-svc-g2cv2 [1.003301955s] Aug 12 12:15:08.437: INFO: Created: latency-svc-t56dx Aug 12 12:15:08.450: INFO: Got endpoints: latency-svc-t56dx [957.108154ms] Aug 12 12:15:08.468: INFO: Created: latency-svc-9tlt4 Aug 12 12:15:08.480: INFO: Got endpoints: latency-svc-9tlt4 [915.299477ms] Aug 12 12:15:08.504: INFO: Created: latency-svc-2bdw4 Aug 12 12:15:08.516: INFO: Got endpoints: latency-svc-2bdw4 [867.180069ms] Aug 12 12:15:08.586: INFO: Created: latency-svc-bsjbs Aug 12 12:15:08.588: INFO: Got endpoints: latency-svc-bsjbs [848.401913ms] Aug 12 12:15:08.624: INFO: Created: latency-svc-gg27p Aug 12 12:15:08.660: INFO: Got endpoints: latency-svc-gg27p [842.380651ms] Aug 12 12:15:08.734: INFO: Created: latency-svc-h4x7s Aug 12 12:15:08.745: INFO: Got endpoints: latency-svc-h4x7s [861.426863ms] Aug 12 12:15:08.779: INFO: Created: latency-svc-vxh56 Aug 12 12:15:08.800: INFO: Got endpoints: latency-svc-vxh56 [834.532627ms] Aug 12 12:15:08.898: INFO: Created: latency-svc-92hrt Aug 12 12:15:08.900: INFO: Got endpoints: latency-svc-92hrt [882.309234ms] Aug 12 12:15:08.949: INFO: Created: latency-svc-l6l5h Aug 12 12:15:08.962: INFO: Got endpoints: latency-svc-l6l5h [849.264388ms] Aug 12 12:15:08.990: INFO: Created: latency-svc-hflxp Aug 12 12:15:09.053: INFO: Got endpoints: latency-svc-hflxp [910.499629ms] Aug 12 12:15:09.056: INFO: Created: latency-svc-pbndm Aug 12 12:15:09.070: INFO: Got endpoints: latency-svc-pbndm [885.479361ms] Aug 12 12:15:09.098: INFO: Created: latency-svc-2mqt8 Aug 12 12:15:09.125: INFO: Got endpoints: latency-svc-2mqt8 [857.300842ms] Aug 12 12:15:09.145: INFO: Created: latency-svc-kj29g Aug 12 12:15:09.195: INFO: Got endpoints: latency-svc-kj29g [907.985651ms] Aug 12 12:15:09.199: INFO: Created: latency-svc-p8pgx Aug 12 12:15:09.209: INFO: Got endpoints: latency-svc-p8pgx [879.056015ms] Aug 12 12:15:09.230: INFO: Created: latency-svc-jckhc Aug 12 12:15:09.239: INFO: Got endpoints: 
latency-svc-jckhc [804.279606ms] Aug 12 12:15:09.260: INFO: Created: latency-svc-w52hj Aug 12 12:15:09.276: INFO: Got endpoints: latency-svc-w52hj [826.300237ms] Aug 12 12:15:09.296: INFO: Created: latency-svc-vmxgx Aug 12 12:15:09.357: INFO: Got endpoints: latency-svc-vmxgx [877.425019ms] Aug 12 12:15:09.403: INFO: Created: latency-svc-w79zm Aug 12 12:15:09.420: INFO: Got endpoints: latency-svc-w79zm [903.702153ms] Aug 12 12:15:09.439: INFO: Created: latency-svc-vvwlt Aug 12 12:15:09.450: INFO: Got endpoints: latency-svc-vvwlt [861.862992ms] Aug 12 12:15:09.508: INFO: Created: latency-svc-wwg78 Aug 12 12:15:09.529: INFO: Got endpoints: latency-svc-wwg78 [869.194436ms] Aug 12 12:15:09.561: INFO: Created: latency-svc-v6k57 Aug 12 12:15:09.577: INFO: Got endpoints: latency-svc-v6k57 [831.754733ms] Aug 12 12:15:09.676: INFO: Created: latency-svc-pwdpm Aug 12 12:15:09.716: INFO: Got endpoints: latency-svc-pwdpm [916.114674ms] Aug 12 12:15:09.758: INFO: Created: latency-svc-7tztx Aug 12 12:15:09.849: INFO: Got endpoints: latency-svc-7tztx [948.340882ms] Aug 12 12:15:09.852: INFO: Created: latency-svc-9g7n4 Aug 12 12:15:09.867: INFO: Got endpoints: latency-svc-9g7n4 [904.969615ms] Aug 12 12:15:09.902: INFO: Created: latency-svc-kkwtl Aug 12 12:15:09.914: INFO: Got endpoints: latency-svc-kkwtl [861.157352ms] Aug 12 12:15:09.931: INFO: Created: latency-svc-dbdzf Aug 12 12:15:09.944: INFO: Got endpoints: latency-svc-dbdzf [873.725928ms] Aug 12 12:15:10.010: INFO: Created: latency-svc-8twnx Aug 12 12:15:10.023: INFO: Got endpoints: latency-svc-8twnx [898.406139ms] Aug 12 12:15:10.041: INFO: Created: latency-svc-qzh4l Aug 12 12:15:10.069: INFO: Got endpoints: latency-svc-qzh4l [874.014786ms] Aug 12 12:15:10.094: INFO: Created: latency-svc-qft6t Aug 12 12:15:10.107: INFO: Got endpoints: latency-svc-qft6t [897.804332ms] Aug 12 12:15:10.166: INFO: Created: latency-svc-mrqmn Aug 12 12:15:10.173: INFO: Got endpoints: latency-svc-mrqmn [933.57708ms] Aug 12 12:15:10.195: INFO: Created: latency-svc-qwsnk Aug 12 12:15:10.215: INFO: Got endpoints: latency-svc-qwsnk [939.206734ms] Aug 12 12:15:10.250: INFO: Created: latency-svc-s4lwn Aug 12 12:15:10.309: INFO: Got endpoints: latency-svc-s4lwn [951.712055ms] Aug 12 12:15:10.330: INFO: Created: latency-svc-2d6rj Aug 12 12:15:10.342: INFO: Got endpoints: latency-svc-2d6rj [921.619281ms] Aug 12 12:15:10.381: INFO: Created: latency-svc-79jkp Aug 12 12:15:10.391: INFO: Got endpoints: latency-svc-79jkp [941.45795ms] Aug 12 12:15:10.520: INFO: Created: latency-svc-qsmrh Aug 12 12:15:10.581: INFO: Created: latency-svc-sh4h9 Aug 12 12:15:10.615: INFO: Got endpoints: latency-svc-qsmrh [1.086374327s] Aug 12 12:15:10.617: INFO: Created: latency-svc-gpmbc Aug 12 12:15:10.669: INFO: Got endpoints: latency-svc-gpmbc [953.253822ms] Aug 12 12:15:10.670: INFO: Got endpoints: latency-svc-sh4h9 [1.093585091s] Aug 12 12:15:10.755: INFO: Created: latency-svc-59zht Aug 12 12:15:10.848: INFO: Got endpoints: latency-svc-59zht [999.429001ms] Aug 12 12:15:10.851: INFO: Created: latency-svc-n96kp Aug 12 12:15:10.865: INFO: Got endpoints: latency-svc-n96kp [998.024287ms] Aug 12 12:15:10.922: INFO: Created: latency-svc-dhbmg Aug 12 12:15:11.010: INFO: Got endpoints: latency-svc-dhbmg [1.096047633s] Aug 12 12:15:11.012: INFO: Created: latency-svc-t2ntc Aug 12 12:15:11.033: INFO: Got endpoints: latency-svc-t2ntc [1.089372841s] Aug 12 12:15:11.107: INFO: Created: latency-svc-wmsjh Aug 12 12:15:11.213: INFO: Got endpoints: latency-svc-wmsjh [1.190103028s] Aug 12 12:15:11.221: INFO: Created: 
latency-svc-cl7qk Aug 12 12:15:11.263: INFO: Got endpoints: latency-svc-cl7qk [1.193122237s] Aug 12 12:15:11.364: INFO: Created: latency-svc-xpzmb Aug 12 12:15:11.366: INFO: Got endpoints: latency-svc-xpzmb [1.259083707s] Aug 12 12:15:11.508: INFO: Created: latency-svc-zbg28 Aug 12 12:15:11.514: INFO: Got endpoints: latency-svc-zbg28 [1.340484998s] Aug 12 12:15:11.552: INFO: Created: latency-svc-fdnx7 Aug 12 12:15:11.593: INFO: Got endpoints: latency-svc-fdnx7 [1.378174604s] Aug 12 12:15:11.651: INFO: Created: latency-svc-9ksxp Aug 12 12:15:11.654: INFO: Got endpoints: latency-svc-9ksxp [1.344686111s] Aug 12 12:15:11.720: INFO: Created: latency-svc-6pqtr Aug 12 12:15:11.735: INFO: Got endpoints: latency-svc-6pqtr [1.393226179s] Aug 12 12:15:11.801: INFO: Created: latency-svc-bp7mb Aug 12 12:15:11.804: INFO: Got endpoints: latency-svc-bp7mb [1.412130941s] Aug 12 12:15:11.852: INFO: Created: latency-svc-tnf6d Aug 12 12:15:11.861: INFO: Got endpoints: latency-svc-tnf6d [1.245127347s] Aug 12 12:15:11.882: INFO: Created: latency-svc-lnqff Aug 12 12:15:11.944: INFO: Got endpoints: latency-svc-lnqff [1.274870343s] Aug 12 12:15:11.962: INFO: Created: latency-svc-v4ds6 Aug 12 12:15:11.976: INFO: Got endpoints: latency-svc-v4ds6 [1.305688412s] Aug 12 12:15:12.020: INFO: Created: latency-svc-s8cdc Aug 12 12:15:12.036: INFO: Got endpoints: latency-svc-s8cdc [1.187661848s] Aug 12 12:15:12.088: INFO: Created: latency-svc-8sbg5 Aug 12 12:15:12.091: INFO: Got endpoints: latency-svc-8sbg5 [1.225839165s] Aug 12 12:15:12.117: INFO: Created: latency-svc-dtbp6 Aug 12 12:15:12.169: INFO: Got endpoints: latency-svc-dtbp6 [1.159421345s] Aug 12 12:15:12.226: INFO: Created: latency-svc-d94vg Aug 12 12:15:12.248: INFO: Got endpoints: latency-svc-d94vg [1.214370909s] Aug 12 12:15:12.248: INFO: Created: latency-svc-c4k9b Aug 12 12:15:12.259: INFO: Got endpoints: latency-svc-c4k9b [1.045514908s] Aug 12 12:15:12.277: INFO: Created: latency-svc-crb22 Aug 12 12:15:12.289: INFO: Got endpoints: latency-svc-crb22 [1.026077072s] Aug 12 12:15:12.314: INFO: Created: latency-svc-4x5m6 Aug 12 12:15:12.325: INFO: Got endpoints: latency-svc-4x5m6 [959.473079ms] Aug 12 12:15:12.375: INFO: Created: latency-svc-276z2 Aug 12 12:15:12.392: INFO: Got endpoints: latency-svc-276z2 [877.868844ms] Aug 12 12:15:12.440: INFO: Created: latency-svc-nx4mw Aug 12 12:15:12.458: INFO: Got endpoints: latency-svc-nx4mw [864.283274ms] Aug 12 12:15:12.531: INFO: Created: latency-svc-mddm5 Aug 12 12:15:12.534: INFO: Got endpoints: latency-svc-mddm5 [880.428502ms] Aug 12 12:15:12.560: INFO: Created: latency-svc-qvjbh Aug 12 12:15:12.573: INFO: Got endpoints: latency-svc-qvjbh [838.555109ms] Aug 12 12:15:12.589: INFO: Created: latency-svc-snsqf Aug 12 12:15:12.613: INFO: Got endpoints: latency-svc-snsqf [809.536748ms] Aug 12 12:15:12.682: INFO: Created: latency-svc-2ffz8 Aug 12 12:15:12.727: INFO: Created: latency-svc-dkdnv Aug 12 12:15:12.728: INFO: Got endpoints: latency-svc-2ffz8 [866.92909ms] Aug 12 12:15:12.747: INFO: Got endpoints: latency-svc-dkdnv [803.129302ms] Aug 12 12:15:12.776: INFO: Created: latency-svc-nctgc Aug 12 12:15:12.824: INFO: Got endpoints: latency-svc-nctgc [848.305432ms] Aug 12 12:15:12.826: INFO: Created: latency-svc-lfmv7 Aug 12 12:15:12.843: INFO: Got endpoints: latency-svc-lfmv7 [807.254557ms] Aug 12 12:15:12.963: INFO: Created: latency-svc-72nzh Aug 12 12:15:12.965: INFO: Got endpoints: latency-svc-72nzh [873.975655ms] Aug 12 12:15:12.991: INFO: Created: latency-svc-vkt84 Aug 12 12:15:13.018: INFO: Got endpoints: 
latency-svc-vkt84 [848.899367ms] Aug 12 12:15:13.040: INFO: Created: latency-svc-8zml5 Aug 12 12:15:13.124: INFO: Got endpoints: latency-svc-8zml5 [875.887397ms] Aug 12 12:15:13.126: INFO: Created: latency-svc-7pbhk Aug 12 12:15:13.163: INFO: Got endpoints: latency-svc-7pbhk [903.793537ms] Aug 12 12:15:13.202: INFO: Created: latency-svc-x4qvw Aug 12 12:15:13.217: INFO: Got endpoints: latency-svc-x4qvw [928.112861ms] Aug 12 12:15:13.273: INFO: Created: latency-svc-6s7vn Aug 12 12:15:13.276: INFO: Got endpoints: latency-svc-6s7vn [950.176933ms] Aug 12 12:15:13.303: INFO: Created: latency-svc-slv8v Aug 12 12:15:13.333: INFO: Got endpoints: latency-svc-slv8v [941.52519ms] Aug 12 12:15:13.333: INFO: Latencies: [60.446993ms 109.700975ms 143.094537ms 145.763428ms 229.716351ms 242.037613ms 323.964351ms 386.344087ms 483.617575ms 659.081726ms 711.473249ms 746.524944ms 764.039613ms 788.756179ms 792.217846ms 793.158856ms 794.036641ms 803.129302ms 804.279606ms 807.254557ms 808.166504ms 808.217375ms 809.536748ms 812.441724ms 812.50995ms 812.62201ms 826.300237ms 827.230764ms 830.111322ms 831.754733ms 834.532627ms 834.798144ms 837.30369ms 838.555109ms 842.380651ms 848.305432ms 848.401913ms 848.688958ms 848.899367ms 849.264388ms 852.997554ms 854.13645ms 856.299851ms 857.300842ms 860.220354ms 861.157352ms 861.426863ms 861.862992ms 864.283274ms 866.92909ms 867.180069ms 869.194436ms 873.394177ms 873.725928ms 873.975655ms 874.014786ms 875.887397ms 877.425019ms 877.868844ms 878.197476ms 879.056015ms 880.428502ms 882.187475ms 882.309234ms 883.452837ms 885.254099ms 885.479361ms 889.626295ms 890.486821ms 891.282449ms 892.441076ms 897.604806ms 897.804332ms 898.406139ms 902.927523ms 903.702153ms 903.793537ms 904.061983ms 904.182421ms 904.288889ms 904.491763ms 904.969615ms 907.365081ms 907.371258ms 907.985651ms 909.977735ms 910.12055ms 910.499629ms 912.614365ms 914.961016ms 915.299477ms 916.114674ms 916.646269ms 920.372301ms 920.764183ms 921.619281ms 922.807915ms 924.104963ms 925.011344ms 926.999874ms 928.112861ms 928.201462ms 933.57708ms 933.678746ms 933.724295ms 937.884329ms 938.10481ms 939.206734ms 940.033599ms 940.114488ms 941.45795ms 941.52519ms 942.675732ms 945.19723ms 945.574616ms 948.340882ms 950.176933ms 951.712055ms 953.253822ms 956.541957ms 957.108154ms 957.714612ms 958.314243ms 958.800617ms 959.221346ms 959.473079ms 962.411335ms 963.513061ms 965.201991ms 967.002935ms 973.661728ms 987.100584ms 988.447888ms 990.574285ms 992.782014ms 993.355013ms 995.025865ms 997.597019ms 998.024287ms 998.085602ms 999.211038ms 999.429001ms 1.003301955s 1.005032953s 1.005318542s 1.009170285s 1.009639086s 1.016130974s 1.022638356s 1.024370954s 1.026077072s 1.028608483s 1.028667445s 1.031452529s 1.032683013s 1.041142141s 1.043947822s 1.045514908s 1.047143314s 1.054539673s 1.058065598s 1.06457654s 1.071792227s 1.0852479s 1.086374327s 1.089372841s 1.09277202s 1.093585091s 1.095181012s 1.096047633s 1.098024877s 1.101059428s 1.101259446s 1.101775282s 1.103203207s 1.107970696s 1.116423539s 1.122756743s 1.124630595s 1.125490273s 1.127676181s 1.130280464s 1.137590236s 1.138914924s 1.143467038s 1.159421345s 1.187661848s 1.190103028s 1.193122237s 1.214370909s 1.225839165s 1.245127347s 1.259083707s 1.274870343s 1.305688412s 1.340484998s 1.344686111s 1.378174604s 1.393226179s 1.412130941s] Aug 12 12:15:13.334: INFO: 50 %ile: 928.112861ms Aug 12 12:15:13.334: INFO: 90 %ile: 1.127676181s Aug 12 12:15:13.334: INFO: 99 %ile: 1.393226179s Aug 12 12:15:13.334: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:15:13.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-9g9dj" for this suite. Aug 12 12:15:51.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:15:51.384: INFO: namespace: e2e-tests-svc-latency-9g9dj, resource: bindings, ignored listing per whitelist Aug 12 12:15:51.423: INFO: namespace e2e-tests-svc-latency-9g9dj deletion completed in 38.071721441s • [SLOW TEST:54.270 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:15:51.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-9qcz8 Aug 12 12:15:56.105: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-9qcz8 STEP: checking the pod's current state and verifying that restartCount is present Aug 12 12:15:56.108: INFO: Initial restart count of pod liveness-exec is 0 Aug 12 12:16:44.510: INFO: Restart count of pod e2e-tests-container-probe-9qcz8/liveness-exec is now 1 (48.401491923s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:16:44.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9qcz8" for this suite. 
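For context on the restart-count transition logged above: the liveness-exec pod pairs a container that removes its own health file with an exec probe that runs `cat /tmp/health`, so once the file is gone the probe fails and the kubelet restarts the container. A rough client-go sketch of an equivalent pod follows — it is not the e2e framework's own helper; the busybox image, namespace, timings, the context-style client-go API (v0.18+) and the `ProbeHandler` field name (k8s.io/api v0.23+) are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig path the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // the suite uses a generated e2e-tests-container-probe-* namespace
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for ~30s, then the probe target disappears.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll the restart count, the same quantity the log above reports going from 0 to 1.
	for i := 0; i < 30; i++ {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), "liveness-exec", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(p.Status.ContainerStatuses) > 0 {
			fmt.Println("restart count:", p.Status.ContainerStatuses[0].RestartCount)
		}
		time.Sleep(10 * time.Second)
	}
}
```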
Aug 12 12:16:50.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:16:50.679: INFO: namespace: e2e-tests-container-probe-9qcz8, resource: bindings, ignored listing per whitelist Aug 12 12:16:50.702: INFO: namespace e2e-tests-container-probe-9qcz8 deletion completed in 6.120543386s • [SLOW TEST:59.279 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:16:50.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Aug 12 12:16:56.839: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-b381121a-dc95-11ea-9b9c-0242ac11000c", GenerateName:"", Namespace:"e2e-tests-pods-wvptv", SelfLink:"/api/v1/namespaces/e2e-tests-pods-wvptv/pods/pod-submit-remove-b381121a-dc95-11ea-9b9c-0242ac11000c", UID:"b3829f8e-dc95-11ea-b2c9-0242ac120008", ResourceVersion:"5905068", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732831410, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"803982144"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wwxdm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001bcfec0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wwxdm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001cdb168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001aafb60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cdb980)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cdb9b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001cdb9b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001cdb9bc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732831410, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732831415, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732831415, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732831410, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.173", StartTime:(*v1.Time)(0xc0017ac7c0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0017ac800), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://6aca7b72d85c385ba7db7aca41baa9012995b28c813f8e3d87947a597a47e560"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Aug 12 12:17:01.852: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:17:01.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wvptv" for this suite. Aug 12 12:17:07.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:17:07.898: INFO: namespace: e2e-tests-pods-wvptv, resource: bindings, ignored listing per whitelist Aug 12 12:17:07.956: INFO: namespace e2e-tests-pods-wvptv deletion completed in 6.096782885s • [SLOW TEST:17.254 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:17:07.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:17:08.066: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:17:12.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5v5kw" for this suite. 
Aug 12 12:18:02.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:18:02.286: INFO: namespace: e2e-tests-pods-5v5kw, resource: bindings, ignored listing per whitelist Aug 12 12:18:02.338: INFO: namespace e2e-tests-pods-5v5kw deletion completed in 50.119151809s • [SLOW TEST:54.381 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:18:02.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 12 12:18:02.498: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-c5j9w,SelfLink:/api/v1/namespaces/e2e-tests-watch-c5j9w/configmaps/e2e-watch-test-resource-version,UID:de36ee47-dc95-11ea-b2c9-0242ac120008,ResourceVersion:5905243,Generation:0,CreationTimestamp:2020-08-12 12:18:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 12 12:18:02.498: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-c5j9w,SelfLink:/api/v1/namespaces/e2e-tests-watch-c5j9w/configmaps/e2e-watch-test-resource-version,UID:de36ee47-dc95-11ea-b2c9-0242ac120008,ResourceVersion:5905244,Generation:0,CreationTimestamp:2020-08-12 12:18:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:18:02.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-c5j9w" for this suite. 
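The MODIFIED and DELETED notifications logged above arrive because a watch opened at the resource version returned by the first update replays every later change to the object. A rough sketch of the same pattern with client-go; the namespace, the configmap name and the recent context-style API are assumptions.

```go
package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFrom replays configmap changes that happened after resource version rv.
func watchFrom(ctx context.Context, cs kubernetes.Interface, ns, name, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector:   "metadata.name=" + name,
		ResourceVersion: rv, // start from a known point instead of "now"
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch error or bookmark object
		}
		fmt.Printf("%s %s rv=%s data=%v\n", ev.Type, cm.Name, cm.ResourceVersion, cm.Data)
	}
	return nil
}
```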
Aug 12 12:18:08.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:18:08.584: INFO: namespace: e2e-tests-watch-c5j9w, resource: bindings, ignored listing per whitelist Aug 12 12:18:08.630: INFO: namespace e2e-tests-watch-c5j9w deletion completed in 6.126717791s • [SLOW TEST:6.291 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:18:08.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 12 12:18:08.731: INFO: Waiting up to 5m0s for pod "pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-9rmth" to be "success or failure" Aug 12 12:18:08.735: INFO: Pod "pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.921375ms Aug 12 12:18:10.840: INFO: Pod "pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109051732s Aug 12 12:18:12.844: INFO: Pod "pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.113023407s Aug 12 12:18:14.848: INFO: Pod "pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117374766s STEP: Saw pod success Aug 12 12:18:14.848: INFO: Pod "pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:18:14.851: INFO: Trying to get logs from node hunter-worker2 pod pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:18:14.866: INFO: Waiting for pod pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c to disappear Aug 12 12:18:14.877: INFO: Pod pod-e1f1b230-dc95-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:18:14.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9rmth" for this suite. 
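The `(root,0777,default)` case above comes down to a pod with an `emptyDir` volume on the default medium whose container, running as root, checks the mount's 0777 mode and exits so the suite can wait for `Succeeded`. A minimal sketch of such a pod spec; the busybox image and the `stat` check stand in for the suite's mounttest helper.

```go
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod returns a throwaway pod that inspects an emptyDir mount and exits.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // let the pod reach Succeeded
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium = node storage; Medium: "Memory" would use tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "stat -c '%a' /test-volume && touch /test-volume/ok"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}
```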
Aug 12 12:18:20.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:18:20.935: INFO: namespace: e2e-tests-emptydir-9rmth, resource: bindings, ignored listing per whitelist Aug 12 12:18:20.978: INFO: namespace e2e-tests-emptydir-9rmth deletion completed in 6.098909953s • [SLOW TEST:12.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:18:20.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-tc6hc in namespace e2e-tests-proxy-vzs7l I0812 12:18:21.141411 6 runners.go:184] Created replication controller with name: proxy-service-tc6hc, namespace: e2e-tests-proxy-vzs7l, replica count: 1 I0812 12:18:22.191819 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0812 12:18:23.192064 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0812 12:18:24.192269 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0812 12:18:25.192496 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0812 12:18:26.192718 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0812 12:18:27.193103 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0812 12:18:28.193360 6 runners.go:184] proxy-service-tc6hc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 12 12:18:28.196: INFO: setup took 7.12349621s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 12 12:18:28.206: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-vzs7l/pods/proxy-service-tc6hc-rvp8q/proxy/:
>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 12 12:18:47.921: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f6e925cb-dc95-11ea-9b9c-0242ac11000c,GenerateName:,Namespace:e2e-tests-events-g5flz,SelfLink:/api/v1/namespaces/e2e-tests-events-g5flz/pods/send-events-f6e925cb-dc95-11ea-9b9c-0242ac11000c,UID:f6eae5c2-dc95-11ea-b2c9-0242ac120008,ResourceVersion:5905415,Generation:0,CreationTimestamp:2020-08-12 12:18:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 893401872,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zw5sj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zw5sj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-zw5sj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000aa7e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc000aa7e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:18:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:18:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:18:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:18:43 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.211,StartTime:2020-08-12 12:18:43 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-12 12:18:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://2acdb70fc2515f8b7be02e9f76ffc58aa27e776cc049125f8206ae7916a01560}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Aug 12 12:18:49.926: INFO: Saw scheduler event for our pod. 
STEP: checking for kubelet event about the pod Aug 12 12:18:51.949: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:18:51.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-g5flz" for this suite. Aug 12 12:19:29.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:19:30.039: INFO: namespace: e2e-tests-events-g5flz, resource: bindings, ignored listing per whitelist Aug 12 12:19:30.067: INFO: namespace e2e-tests-events-g5flz deletion completed in 38.087736021s • [SLOW TEST:46.251 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:19:30.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:19:34.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-g5rzz" for this suite. 
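
The Kubelet case above schedules a busybox command that always fails and then asserts that the container status carries a terminated state with a non-empty reason. A rough sketch of both halves with placeholder names follows; reading the status back would normally go through a clientset, which is omitted here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// alwaysFailsPod runs a command that exits non-zero, so the kubelet records a
// terminated container state (the e2e case asserts the Reason is non-empty).
func alwaysFailsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/false"},
			}},
		},
	}
}

// terminatedReason pulls the reason the kubelet reported for the first
// container, e.g. "Error" for a non-zero exit.
func terminatedReason(status corev1.PodStatus) string {
	if len(status.ContainerStatuses) == 0 || status.ContainerStatuses[0].State.Terminated == nil {
		return ""
	}
	return status.ContainerStatuses[0].State.Terminated.Reason
}

func main() { fmt.Println(alwaysFailsPod().Name, terminatedReason(corev1.PodStatus{})) }
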
Aug 12 12:19:40.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:19:40.207: INFO: namespace: e2e-tests-kubelet-test-g5rzz, resource: bindings, ignored listing per whitelist Aug 12 12:19:40.256: INFO: namespace e2e-tests-kubelet-test-g5rzz deletion completed in 6.081656393s • [SLOW TEST:10.188 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:19:40.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 12 12:19:40.417: INFO: Waiting up to 5m0s for pod "downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-h65s4" to be "success or failure" Aug 12 12:19:40.427: INFO: Pod "downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.842578ms Aug 12 12:19:42.431: INFO: Pod "downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014210249s Aug 12 12:19:44.435: INFO: Pod "downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018206145s STEP: Saw pod success Aug 12 12:19:44.435: INFO: Pod "downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:19:44.438: INFO: Trying to get logs from node hunter-worker2 pod downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c container dapi-container: STEP: delete the pod Aug 12 12:19:44.476: INFO: Waiting for pod downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:19:44.487: INFO: Pod downward-api-18976a4b-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:19:44.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h65s4" for this suite. 
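
The Downward API case above injects the container's own limits.cpu/memory and requests.cpu/memory as environment variables. A minimal sketch of that wiring via resourceFieldRef; the resource values, env var names and image are illustrative choices, not the test's.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// resourceEnv maps a resource field (e.g. "limits.cpu") onto an env var.
func resourceEnv(name, field string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: field},
		},
	}
}

// downwardAPIPod prints its own limits/requests from the injected env vars.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					resourceEnv("CPU_REQUEST", "requests.cpu"),
					resourceEnv("CPU_LIMIT", "limits.cpu"),
					resourceEnv("MEMORY_REQUEST", "requests.memory"),
					resourceEnv("MEMORY_LIMIT", "limits.memory"),
				},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }
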
Aug 12 12:19:50.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:19:50.517: INFO: namespace: e2e-tests-downward-api-h65s4, resource: bindings, ignored listing per whitelist Aug 12 12:19:50.575: INFO: namespace e2e-tests-downward-api-h65s4 deletion completed in 6.083825769s • [SLOW TEST:10.319 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:19:50.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-1eb554a0-dc96-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 12:19:50.719: INFO: Waiting up to 5m0s for pod "pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-6bsxj" to be "success or failure" Aug 12 12:19:50.742: INFO: Pod "pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.315985ms Aug 12 12:19:52.750: INFO: Pod "pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030358264s Aug 12 12:19:54.753: INFO: Pod "pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033905712s STEP: Saw pod success Aug 12 12:19:54.753: INFO: Pod "pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:19:54.756: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 12:19:54.778: INFO: Waiting for pod pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:19:54.796: INFO: Pod pod-configmaps-1eb7dd12-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:19:54.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6bsxj" for this suite. 
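
The ConfigMap case above mounts a ConfigMap as a volume with defaultMode set and verifies the resulting file mode from inside the pod. A sketch of the volume wiring; the 0400 mode, ConfigMap name and mount path are placeholders.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod mounts ConfigMap "demo-config" with every projected key
// created as mode 0400 (unless overridden per item).
func configMapVolumePod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mode-demo"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "stat -c %a /etc/config/* && cat /etc/config/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "config", MountPath: "/etc/config"}},
			}},
		},
	}
}

func main() { _ = configMapVolumePod() }
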
Aug 12 12:20:00.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:20:00.884: INFO: namespace: e2e-tests-configmap-6bsxj, resource: bindings, ignored listing per whitelist Aug 12 12:20:00.895: INFO: namespace e2e-tests-configmap-6bsxj deletion completed in 6.09537346s • [SLOW TEST:10.320 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:20:00.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Aug 12 12:20:00.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Aug 12 12:20:03.514: INFO: stderr: "" Aug 12 12:20:03.515: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:20:03.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k89dx" for this suite. 
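
The kubectl case above shells out to the binary and effectively checks that "Kubernetes master" (and KubeDNS) appear in the cluster-info output, ANSI colour codes and all. A minimal sketch of that check from Go; it assumes kubectl is on PATH and reuses the kubeconfig path shown in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the e2e framework logs above.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		fmt.Println("cluster-info failed:", err)
		return
	}
	// The conformance check is essentially a substring match on the output.
	fmt.Println("master listed:", strings.Contains(string(out), "Kubernetes master"))
}
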
Aug 12 12:20:09.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:20:09.589: INFO: namespace: e2e-tests-kubectl-k89dx, resource: bindings, ignored listing per whitelist Aug 12 12:20:09.654: INFO: namespace e2e-tests-kubectl-k89dx deletion completed in 6.135630525s • [SLOW TEST:8.759 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:20:09.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 12 12:20:17.805: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 12 12:20:17.822: INFO: Pod pod-with-poststart-http-hook still exists Aug 12 12:20:19.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 12 12:20:19.826: INFO: Pod pod-with-poststart-http-hook still exists Aug 12 12:20:21.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 12 12:20:21.826: INFO: Pod pod-with-poststart-http-hook still exists Aug 12 12:20:23.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 12 12:20:23.825: INFO: Pod pod-with-poststart-http-hook still exists Aug 12 12:20:25.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 12 12:20:25.826: INFO: Pod pod-with-poststart-http-hook still exists Aug 12 12:20:27.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 12 12:20:27.826: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:20:27.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-7g65k" for this suite. 
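
The lifecycle-hook case above first starts a helper pod to receive the hook request, then creates a pod whose container declares a postStart httpGet hook aimed at it; the container is not marked Running until the handler returns. A sketch of the hook wiring, with a placeholder handler IP, port and image; note the handler type is named Handler in the v1.13 API this log was produced with (LifecycleHandler in newer releases).

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPostStartHTTPHook fires an HTTP GET against a helper pod right after
// the container starts; the kubelet does not mark the container Running until
// the handler completes.
func podWithPostStartHTTPHook(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "nginx", // placeholder long-running image
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() { _ = podWithPostStartHTTPHook("10.244.1.1") } // placeholder handler pod IP
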
Aug 12 12:20:49.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:20:49.852: INFO: namespace: e2e-tests-container-lifecycle-hook-7g65k, resource: bindings, ignored listing per whitelist Aug 12 12:20:49.924: INFO: namespace e2e-tests-container-lifecycle-hook-7g65k deletion completed in 22.095030327s • [SLOW TEST:40.271 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:20:49.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 12 12:20:50.090: INFO: Waiting up to 5m0s for pod "pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-m885f" to be "success or failure" Aug 12 12:20:50.122: INFO: Pod "pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.510313ms Aug 12 12:20:52.126: INFO: Pod "pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036840717s Aug 12 12:20:54.130: INFO: Pod "pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04047365s STEP: Saw pod success Aug 12 12:20:54.130: INFO: Pod "pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:20:54.133: INFO: Trying to get logs from node hunter-worker2 pod pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:20:54.165: INFO: Waiting for pod pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:20:54.178: INFO: Pod pod-421f5e8d-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:20:54.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m885f" for this suite. 
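
The case above ("volume on tmpfs should have the correct mode") is the memory-backed variant of the earlier emptyDir test: setting the medium to Memory makes the kubelet back the volume with tmpfs, and the mount's mode is then checked. Only the Medium field differs from the default-medium sketch earlier, so here is just the volume helper; the volume name is arbitrary.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// tmpfsEmptyDir returns an emptyDir volume backed by RAM (tmpfs) instead of
// the node's filesystem; the e2e case then stats the mount point's mode.
func tmpfsEmptyDir(name string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // "Memory" => tmpfs mount
			},
		},
	}
}

func main() { _ = tmpfsEmptyDir("scratch") }
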
Aug 12 12:21:00.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:21:00.219: INFO: namespace: e2e-tests-emptydir-m885f, resource: bindings, ignored listing per whitelist Aug 12 12:21:00.259: INFO: namespace e2e-tests-emptydir-m885f deletion completed in 6.077920497s • [SLOW TEST:10.334 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:21:00.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:21:04.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tbjfw" for this suite. 
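
The Kubelet case above verifies that a busybox container with a read-only root filesystem cannot write to /. A sketch of the relevant securityContext; the pod name, image and write probe are placeholders.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootPod attempts a write to / that is expected to fail because the
// container's root filesystem is mounted read-only.
func readOnlyRootPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs-demo"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-fs",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "touch /should-fail; echo exit=$?"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}

func main() { _ = readOnlyRootPod() }
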
Aug 12 12:21:50.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:21:50.492: INFO: namespace: e2e-tests-kubelet-test-tbjfw, resource: bindings, ignored listing per whitelist Aug 12 12:21:50.577: INFO: namespace e2e-tests-kubelet-test-tbjfw deletion completed in 46.121084776s • [SLOW TEST:50.317 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:21:50.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:21:50.725: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 12 12:21:50.732: INFO: Number of nodes with available pods: 0 Aug 12 12:21:50.732: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 12 12:21:50.808: INFO: Number of nodes with available pods: 0 Aug 12 12:21:50.808: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:51.813: INFO: Number of nodes with available pods: 0 Aug 12 12:21:51.813: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:52.813: INFO: Number of nodes with available pods: 0 Aug 12 12:21:52.813: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:53.812: INFO: Number of nodes with available pods: 0 Aug 12 12:21:53.812: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:54.813: INFO: Number of nodes with available pods: 1 Aug 12 12:21:54.813: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 12 12:21:54.846: INFO: Number of nodes with available pods: 1 Aug 12 12:21:54.846: INFO: Number of running nodes: 0, number of available pods: 1 Aug 12 12:21:55.850: INFO: Number of nodes with available pods: 0 Aug 12 12:21:55.850: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 12 12:21:55.862: INFO: Number of nodes with available pods: 0 Aug 12 12:21:55.862: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:56.865: INFO: Number of nodes with available pods: 0 Aug 12 12:21:56.865: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:57.868: INFO: Number of nodes with available pods: 0 Aug 12 12:21:57.868: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:58.867: INFO: Number of nodes with available pods: 0 Aug 12 12:21:58.867: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:21:59.866: INFO: Number of nodes with available pods: 0 Aug 12 12:21:59.866: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:00.867: INFO: Number of nodes with available pods: 0 Aug 12 12:22:00.867: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:01.867: INFO: Number of nodes with available pods: 0 Aug 12 12:22:01.867: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:02.905: INFO: Number of nodes with available pods: 0 Aug 12 12:22:02.905: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:03.924: INFO: Number of nodes with available pods: 0 Aug 12 12:22:03.924: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:04.866: INFO: Number of nodes with available pods: 0 Aug 12 12:22:04.866: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:05.866: INFO: Number of nodes with available pods: 0 Aug 12 12:22:05.866: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:06.867: INFO: Number of nodes with available pods: 0 Aug 12 12:22:06.867: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:07.866: INFO: Number of nodes with available pods: 0 Aug 12 12:22:07.866: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:08.867: INFO: Number of nodes with available pods: 0 Aug 12 12:22:08.867: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:09.874: INFO: Number of nodes with available pods: 0 Aug 12 12:22:09.874: INFO: Node hunter-worker is running more than one daemon pod Aug 12 12:22:10.867: INFO: Number of nodes with available pods: 0 Aug 12 12:22:10.867: INFO: Node hunter-worker is running 
more than one daemon pod Aug 12 12:22:11.867: INFO: Number of nodes with available pods: 1 Aug 12 12:22:11.867: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pszvw, will wait for the garbage collector to delete the pods Aug 12 12:22:11.932: INFO: Deleting DaemonSet.extensions daemon-set took: 5.865674ms Aug 12 12:22:12.032: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.24827ms Aug 12 12:22:17.636: INFO: Number of nodes with available pods: 0 Aug 12 12:22:17.636: INFO: Number of running nodes: 0, number of available pods: 0 Aug 12 12:22:17.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pszvw/daemonsets","resourceVersion":"5906053"},"items":null} Aug 12 12:22:17.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pszvw/pods","resourceVersion":"5906053"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:22:17.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-pszvw" for this suite. Aug 12 12:22:25.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:22:25.820: INFO: namespace: e2e-tests-daemonsets-pszvw, resource: bindings, ignored listing per whitelist Aug 12 12:22:25.850: INFO: namespace e2e-tests-daemonsets-pszvw deletion completed in 8.093145375s • [SLOW TEST:35.273 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:22:25.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-7b46dbd1-dc96-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:22:25.985: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-kxs4v" to be "success or failure" Aug 12 12:22:25.990: INFO: Pod "pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.900124ms Aug 12 12:22:27.995: INFO: Pod "pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009535657s Aug 12 12:22:29.997: INFO: Pod "pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012192747s STEP: Saw pod success Aug 12 12:22:29.997: INFO: Pod "pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:22:30.000: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 12 12:22:30.032: INFO: Waiting for pod pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:22:30.060: INFO: Pod pod-projected-secrets-7b4887b2-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:22:30.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kxs4v" for this suite. Aug 12 12:22:36.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:22:36.136: INFO: namespace: e2e-tests-projected-kxs4v, resource: bindings, ignored listing per whitelist Aug 12 12:22:36.170: INFO: namespace e2e-tests-projected-kxs4v deletion completed in 6.105766906s • [SLOW TEST:10.319 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:22:36.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-8174c125-dc96-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:22:36.426: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-l49b6" to be "success or failure" Aug 12 12:22:36.429: INFO: Pod "pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782363ms Aug 12 12:22:38.474: INFO: Pod "pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048430186s Aug 12 12:22:40.479: INFO: Pod "pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.052915694s Aug 12 12:22:42.483: INFO: Pod "pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057079906s STEP: Saw pod success Aug 12 12:22:42.483: INFO: Pod "pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:22:42.486: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 12 12:22:42.523: INFO: Waiting for pod pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:22:42.536: INFO: Pod pod-projected-secrets-81758747-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:22:42.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l49b6" for this suite. Aug 12 12:22:48.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:22:48.630: INFO: namespace: e2e-tests-projected-l49b6, resource: bindings, ignored listing per whitelist Aug 12 12:22:48.680: INFO: namespace e2e-tests-projected-l49b6 deletion completed in 6.140639263s • [SLOW TEST:12.510 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:22:48.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:22:48.792: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 12 12:22:53.876: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 12 12:22:53.876: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 12 12:22:53.906: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-f8x62,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f8x62/deployments/test-cleanup-deployment,UID:8bea953c-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906211,Generation:1,CreationTimestamp:2020-08-12 12:22:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Aug 12 12:22:53.925: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Aug 12 12:22:53.925: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 12 12:22:53.925: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-f8x62,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f8x62/replicasets/test-cleanup-controller,UID:88df5b33-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906212,Generation:1,CreationTimestamp:2020-08-12 12:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 8bea953c-dc96-11ea-b2c9-0242ac120008 0xc001c9a617 0xc001c9a618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 12 12:22:53.945: INFO: Pod "test-cleanup-controller-jvzgp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-jvzgp,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-f8x62,SelfLink:/api/v1/namespaces/e2e-tests-deployment-f8x62/pods/test-cleanup-controller-jvzgp,UID:88e252d8-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906204,Generation:0,CreationTimestamp:2020-08-12 12:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 88df5b33-dc96-11ea-b2c9-0242ac120008 0xc001fb1f87 0xc001fb1f88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nn7mj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nn7mj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-nn7mj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002506000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002506080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:22:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:22:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:22:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:22:48 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.183,StartTime:2020-08-12 12:22:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:22:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c800b0305c776b3c72463beb1bac7b0f9a8aeec4c5be6af0f25fbf5e08954261}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:22:53.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-f8x62" for this suite. 
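
The Deployment case above hinges on spec.revisionHistoryLimit: the object dump earlier shows RevisionHistoryLimit:*0 on test-cleanup-deployment, so once the new rollout completes the controller is expected to garbage-collect the superseded ReplicaSet (test-cleanup-controller). The sketch below reconstructs such a deployment from the fields visible in that dump; it is an illustration, not the framework's own construction code.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment keeps zero old ReplicaSets around: every superseded
// revision is deleted by the deployment controller after a rollout.
func cleanupDeployment() *appsv1.Deployment {
	replicas := int32(1)
	historyLimit := int32(0) // 0 = delete all old ReplicaSets once they are scaled down
	labels := map[string]string{"name": "cleanup-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

func main() { _ = cleanupDeployment() }
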
Aug 12 12:23:02.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:23:02.300: INFO: namespace: e2e-tests-deployment-f8x62, resource: bindings, ignored listing per whitelist Aug 12 12:23:02.316: INFO: namespace e2e-tests-deployment-f8x62 deletion completed in 8.223884382s • [SLOW TEST:13.636 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:23:02.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:23:02.608: INFO: Creating deployment "nginx-deployment" Aug 12 12:23:02.638: INFO: Waiting for observed generation 1 Aug 12 12:23:04.991: INFO: Waiting for all required pods to come up Aug 12 12:23:05.056: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 12 12:23:19.258: INFO: Waiting for deployment "nginx-deployment" to complete Aug 12 12:23:19.264: INFO: Updating deployment "nginx-deployment" with a non-existent image Aug 12 12:23:19.270: INFO: Updating deployment nginx-deployment Aug 12 12:23:19.270: INFO: Waiting for observed generation 2 Aug 12 12:23:21.392: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 12 12:23:21.436: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 12 12:23:21.685: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 12 12:23:21.692: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 12 12:23:21.692: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 12 12:23:21.694: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Aug 12 12:23:21.699: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Aug 12 12:23:21.699: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Aug 12 12:23:21.704: INFO: Updating deployment nginx-deployment Aug 12 12:23:21.704: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Aug 12 12:23:21.895: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 12 12:23:24.851: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 12 12:23:25.709: INFO: 
Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-xk84s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk84s/deployments/nginx-deployment,UID:911e038f-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906544,Generation:3,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-12 12:23:19 +0000 UTC 2020-08-12 12:23:02 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-08-12 12:23:21 +0000 UTC 2020-08-12 12:23:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Aug 12 12:23:26.121: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-xk84s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk84s/replicasets/nginx-deployment-5c98f8fb5,UID:9b0c74b5-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906538,Generation:3,CreationTimestamp:2020-08-12 12:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 911e038f-dc96-11ea-b2c9-0242ac120008 0xc002521857 0xc002521858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 12 12:23:26.121: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 12 12:23:26.121: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-xk84s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk84s/replicasets/nginx-deployment-85ddf47c5d,UID:9122b9a6-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906534,Generation:3,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 911e038f-dc96-11ea-b2c9-0242ac120008 0xc002521917 0xc002521918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 12 12:23:26.320: INFO: Pod "nginx-deployment-5c98f8fb5-4gpkr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4gpkr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-4gpkr,UID:9b0fcbe1-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906450,Generation:0,CreationTimestamp:2020-08-12 12:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc001f67927 0xc001f67928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f679a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f679c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:23:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.320: INFO: Pod "nginx-deployment-5c98f8fb5-ctxnr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ctxnr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-ctxnr,UID:9cf0c6ca-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906567,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc001f67a80 0xc001f67a81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f67b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f67b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-12 12:23:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.320: INFO: Pod "nginx-deployment-5c98f8fb5-dcdlv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dcdlv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-dcdlv,UID:9b3e79ea-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906467,Generation:0,CreationTimestamp:2020-08-12 12:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc001f67c60 0xc001f67c61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f67ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f67d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-12 12:23:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.320: INFO: Pod "nginx-deployment-5c98f8fb5-hrcjh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hrcjh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-hrcjh,UID:9c9d2c7c-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906563,Generation:0,CreationTimestamp:2020-08-12 12:23:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc001f67e20 0xc001f67e21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f67ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f67ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:23:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.321: INFO: Pod "nginx-deployment-5c98f8fb5-kvkdt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kvkdt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-kvkdt,UID:9d2a867f-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906526,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc001f67f80 0xc001f67f81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ba050} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ba070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.321: INFO: Pod "nginx-deployment-5c98f8fb5-kxvw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kxvw8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-kxvw8,UID:9d2a6054-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906525,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025ba187 0xc0025ba188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ba370} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ba390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.321: INFO: Pod "nginx-deployment-5c98f8fb5-mx6sm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mx6sm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-mx6sm,UID:9b3bef7e-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906465,Generation:0,CreationTimestamp:2020-08-12 12:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025ba407 0xc0025ba408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ba660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ba680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:23:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.321: INFO: Pod "nginx-deployment-5c98f8fb5-s5l2h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s5l2h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-s5l2h,UID:9cf0bf82-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906575,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025ba850 0xc0025ba851}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ba8d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ba8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:23:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 
0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.321: INFO: Pod "nginx-deployment-5c98f8fb5-snx7m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-snx7m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-snx7m,UID:9d2a8d27-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906528,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025bace0 0xc0025bace1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bad60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bad80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.321: INFO: Pod "nginx-deployment-5c98f8fb5-sr272" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sr272,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-sr272,UID:9b0feb01-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906447,Generation:0,CreationTimestamp:2020-08-12 12:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025bae77 0xc0025bae78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bb1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bb1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-12 12:23:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-5c98f8fb5-tjw96" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tjw96,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-tjw96,UID:9b0cde0d-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906440,Generation:0,CreationTimestamp:2020-08-12 12:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025bb290 0xc0025bb291}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bb430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bb450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:19 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-12 12:23:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-5c98f8fb5-tznt7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tznt7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-tznt7,UID:9d2a7df4-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906524,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025bb580 0xc0025bb581}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bb780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bb7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-5c98f8fb5-zlljp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zlljp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-5c98f8fb5-zlljp,UID:9d2d14e0-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906533,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9b0c74b5-dc96-11ea-b2c9-0242ac120008 0xc0025bb817 0xc0025bb818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bb9d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bb9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-85ddf47c5d-2s7vs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2s7vs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-2s7vs,UID:9c9cfa69-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906553,Generation:0,CreationTimestamp:2020-08-12 12:23:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0025bbaf7 0xc0025bbaf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bbc50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bbc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:23:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-85ddf47c5d-75r8s" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-75r8s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-75r8s,UID:9d2a2874-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906521,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0025bbd37 0xc0025bbd38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bbe20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bbe40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-85ddf47c5d-9chcz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9chcz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-9chcz,UID:9d2ab8b7-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906527,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0025bbec7 0xc0025bbec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025bbf40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025bbfd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.322: INFO: Pod "nginx-deployment-85ddf47c5d-9gcjq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9gcjq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-9gcjq,UID:9138c6c5-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906363,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894047 0xc002894048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028940c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028940e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:02 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.216,StartTime:2020-08-12 12:23:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a73872c87a71c7c8a33d3d8df317752c1c789752677902c1b0fa337b4a6ceaa9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-d8pxh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d8pxh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-d8pxh,UID:9c9cfcdb-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906541,Generation:0,CreationTimestamp:2020-08-12 12:23:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028941a7 0xc0028941a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894220} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-12 12:23:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-d8rhz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d8rhz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-d8rhz,UID:9d2a3391-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906520,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028942f7 0xc0028942f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894370} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-f79x8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f79x8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-f79x8,UID:9139458e-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906376,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894407 0xc002894408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894480} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028944a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:02 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.217,StartTime:2020-08-12 12:23:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://96ff74bd281c24ef62d3fb92b1dcb7a70ed8bcd6cdf0a8970883eafcd94e9199}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-flv7x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-flv7x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-flv7x,UID:91404c4e-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906405,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894567 0xc002894568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028945e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.218,StartTime:2020-08-12 12:23:04 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-08-12 12:23:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5aad1e25c8c15d75341b4c1bfbddb3855e501c05c85b7b852414aa79a1574194}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-gtsrt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gtsrt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-gtsrt,UID:9cf08e8a-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906516,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028946c7 0xc0028946c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894740} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-hsf26" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hsf26,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-hsf26,UID:9cf0a4fd-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906514,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028947d7 0xc0028947d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894850} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-ldh2q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ldh2q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-ldh2q,UID:9140423b-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906385,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028948e7 0xc0028948e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894960} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.186,StartTime:2020-08-12 12:23:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e2b1862573b6491d066a2fe4098b81a90253f6802e1e968e4bcb846f72881fa9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-m5wcz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m5wcz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-m5wcz,UID:91405766-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906402,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894a47 0xc002894a48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.219,StartTime:2020-08-12 12:23:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2e51342772f4ddb250a6d113a89edbbde8b9feee9e15a016040e419a5bd19e91}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.323: INFO: Pod "nginx-deployment-85ddf47c5d-mg2p6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mg2p6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-mg2p6,UID:91892cf2-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906408,Generation:0,CreationTimestamp:2020-08-12 12:23:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894ba7 0xc002894ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.220,StartTime:2020-08-12 12:23:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9f9d83bf6b3dbcaf9eb7ae4fe29d23546c0e3f7e28f6e7cec126a3f10c9b11d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-mm576" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mm576,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-mm576,UID:91394194-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906374,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894d07 0xc002894d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:02 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.185,StartTime:2020-08-12 12:23:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b72a76d258ab73dc532f1998a63474e7e60c7cf01200adcd468c32320fba1fd8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-n6k54" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n6k54,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-n6k54,UID:9c852336-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906537,Generation:0,CreationTimestamp:2020-08-12 12:23:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894e67 0xc002894e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002894ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002894f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-08-12 12:23:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-phlwq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-phlwq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-phlwq,UID:9cf08671-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906510,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002894fb7 0xc002894fb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002895030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002895050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-s8jzm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s8jzm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-s8jzm,UID:9d2a3d53-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906518,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028950c7 0xc0028950c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002895140} {node.kubernetes.io/unreachable Exists NoExecute 0xc002895160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-tfv24" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tfv24,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-tfv24,UID:9d2a4f2f-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906519,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028951d7 0xc0028951d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002895250} {node.kubernetes.io/unreachable Exists NoExecute 0xc002895270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-wdlx5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wdlx5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-wdlx5,UID:91405718-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906393,Generation:0,CreationTimestamp:2020-08-12 12:23:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc0028952e7 
0xc0028952e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002895370} {node.kubernetes.io/unreachable Exists NoExecute 0xc002895390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:03 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.187,StartTime:2020-08-12 12:23:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-12 12:23:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4c89f32ae96dc19bf55aea25fbf52e9a3abd6ed85b488d05308f922713e3109a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 12 12:23:26.324: INFO: Pod "nginx-deployment-85ddf47c5d-x7np9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x7np9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk84s,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk84s/pods/nginx-deployment-85ddf47c5d-x7np9,UID:9cf07a53-dc96-11ea-b2c9-0242ac120008,ResourceVersion:5906555,Generation:0,CreationTimestamp:2020-08-12 12:23:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 9122b9a6-dc96-11ea-b2c9-0242ac120008 0xc002895457 0xc002895458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mbbgc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mbbgc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-mbbgc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028954d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028954f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-12 12:23:22 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-12 12:23:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:23:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-xk84s" for this suite. 
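For reference, the pod dump above belongs to the Deployment "proportional scaling" check: a Deployment is scaled while a rolling update is still in progress, and the new replicas are expected to be distributed across the old and new ReplicaSets in proportion to their sizes. A minimal sketch of the kind of Deployment this appears to exercise is below; the replica count and surge settings are illustrative assumptions, not the exact e2e fixture.

# Illustrative RollingUpdate Deployment, similar in shape to the one dumped above.
kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Scaling while a rollout is still in flight is what triggers proportional
# scaling across the old and new ReplicaSets:
kubectl scale deployment nginx-deployment --replicas=30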
Aug 12 12:23:52.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:23:52.303: INFO: namespace: e2e-tests-deployment-xk84s, resource: bindings, ignored listing per whitelist Aug 12 12:23:52.353: INFO: namespace e2e-tests-deployment-xk84s deletion completed in 25.057490909s • [SLOW TEST:50.036 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:23:52.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Aug 12 12:23:52.515: INFO: Waiting up to 5m0s for pod "downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-px6kz" to be "success or failure" Aug 12 12:23:52.520: INFO: Pod "downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670306ms Aug 12 12:23:54.524: INFO: Pod "downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008929302s Aug 12 12:23:56.528: INFO: Pod "downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013201505s STEP: Saw pod success Aug 12 12:23:56.528: INFO: Pod "downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:23:56.531: INFO: Trying to get logs from node hunter-worker pod downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c container dapi-container: STEP: delete the pod Aug 12 12:23:56.591: INFO: Waiting for pod downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:23:56.598: INFO: Pod downward-api-aedb1cb8-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:23:56.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-px6kz" for this suite. 
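For reference, the downward-api run above ("should provide pod name, namespace and IP address as env vars") creates a pod whose environment is filled in from pod fields. A minimal sketch of such a pod follows; the pod name, image, and command are assumptions, not the e2e fixture, though dapi-container matches the container name in the log.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF

The test then reads the container's logs (the "Trying to get logs" step above) and checks that each variable carries the expected value.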
Aug 12 12:24:02.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:24:02.675: INFO: namespace: e2e-tests-downward-api-px6kz, resource: bindings, ignored listing per whitelist Aug 12 12:24:02.691: INFO: namespace e2e-tests-downward-api-px6kz deletion completed in 6.089640578s • [SLOW TEST:10.338 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:24:02.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 12 12:24:02.823: INFO: Waiting up to 5m0s for pod "pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-4xmx9" to be "success or failure" Aug 12 12:24:02.827: INFO: Pod "pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632511ms Aug 12 12:24:04.895: INFO: Pod "pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071882924s Aug 12 12:24:06.898: INFO: Pod "pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.075493995s Aug 12 12:24:08.907: INFO: Pod "pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083805588s STEP: Saw pod success Aug 12 12:24:08.907: INFO: Pod "pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:24:08.910: INFO: Trying to get logs from node hunter-worker2 pod pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:24:08.944: INFO: Waiting for pod pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:24:08.964: INFO: Pod pod-b4ff781a-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:24:08.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4xmx9" for this suite. 
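For reference, the emptydir case above ("non-root,0644,tmpfs") runs a pod as a non-root user, mounts a memory-backed emptyDir, and verifies a file created with mode 0644. A rough equivalent is sketched below; the image, UID, and paths are assumptions, though test-container matches the container name in the log.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/file && chmod 0644 /mnt/test/file && ls -ln /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed, the "tmpfs" part of the test name
EOF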
Aug 12 12:24:14.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:24:15.039: INFO: namespace: e2e-tests-emptydir-4xmx9, resource: bindings, ignored listing per whitelist Aug 12 12:24:15.052: INFO: namespace e2e-tests-emptydir-4xmx9 deletion completed in 6.082809846s • [SLOW TEST:12.360 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:24:15.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:24:21.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-sshdh" for this suite. 
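For reference, the Kubelet case above runs a busybox pod with a one-shot command and checks that its stdout is visible through the log API. A minimal sketch of that flow; the pod name and message are made up.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello from the kubelet test"]
EOF

# Once the container has exited, the output should be retrievable with:
kubectl logs busybox-logs-demo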
Aug 12 12:25:13.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:25:13.252: INFO: namespace: e2e-tests-kubelet-test-sshdh, resource: bindings, ignored listing per whitelist Aug 12 12:25:13.302: INFO: namespace e2e-tests-kubelet-test-sshdh deletion completed in 52.082728339s • [SLOW TEST:58.250 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:25:13.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 12:25:13.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-7s6rl" to be "success or failure" Aug 12 12:25:13.582: INFO: Pod "downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.495848ms Aug 12 12:25:15.586: INFO: Pod "downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011131451s Aug 12 12:25:17.590: INFO: Pod "downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015382236s Aug 12 12:25:19.594: INFO: Pod "downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019392471s STEP: Saw pod success Aug 12 12:25:19.594: INFO: Pod "downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:25:19.597: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 12:25:19.619: INFO: Waiting for pod downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c to disappear Aug 12 12:25:19.630: INFO: Pod downwardapi-volume-df2d0257-dc96-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:25:19.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7s6rl" for this suite. 
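For reference, the Projected downwardAPI run above ("should provide container's cpu limit") mounts the container's own CPU limit as a file through a projected volume. A hedged sketch follows; the limit values, mount path, and file name are assumptions, though client-container matches the container name in the log.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF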
Aug 12 12:25:25.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:25:25.693: INFO: namespace: e2e-tests-projected-7s6rl, resource: bindings, ignored listing per whitelist Aug 12 12:25:25.735: INFO: namespace e2e-tests-projected-7s6rl deletion completed in 6.10176704s • [SLOW TEST:12.432 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:25:25.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e68165c8-dc96-11ea-9b9c-0242ac11000c STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e68165c8-dc96-11ea-9b9c-0242ac11000c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:25:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nhbqn" for this suite. 
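For reference, the Projected configMap case above mounts a configMap through a projected volume, updates the configMap, and waits for the kubelet to refresh the mounted file. The same flow, sketched with placeholder names and keys:

kubectl create configmap demo-cm --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF

# An update to the configMap is eventually reflected in the mounted file,
# which is what the "waiting to observe update in volume" step polls for:
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'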
Aug 12 12:25:54.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:25:54.059: INFO: namespace: e2e-tests-projected-nhbqn, resource: bindings, ignored listing per whitelist Aug 12 12:25:54.104: INFO: namespace e2e-tests-projected-nhbqn deletion completed in 22.171934658s • [SLOW TEST:28.369 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:25:54.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Aug 12 12:25:54.532: INFO: namespace e2e-tests-kubectl-h5hzs Aug 12 12:25:54.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h5hzs' Aug 12 12:25:54.875: INFO: stderr: "" Aug 12 12:25:54.875: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 12 12:25:55.878: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:25:55.878: INFO: Found 0 / 1 Aug 12 12:25:56.882: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:25:56.882: INFO: Found 0 / 1 Aug 12 12:25:57.878: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:25:57.878: INFO: Found 0 / 1 Aug 12 12:25:58.879: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:25:58.879: INFO: Found 1 / 1 Aug 12 12:25:58.879: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 12 12:25:58.882: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:25:58.882: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 12 12:25:58.882: INFO: wait on redis-master startup in e2e-tests-kubectl-h5hzs Aug 12 12:25:58.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-c647s redis-master --namespace=e2e-tests-kubectl-h5hzs' Aug 12 12:25:58.992: INFO: stderr: "" Aug 12 12:25:58.992: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 Aug 12:25:57.723 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Aug 12:25:57.723 # Server started, Redis version 3.2.12\n1:M 12 Aug 12:25:57.723 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Aug 12:25:57.723 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Aug 12 12:25:58.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-h5hzs' Aug 12 12:25:59.155: INFO: stderr: "" Aug 12 12:25:59.155: INFO: stdout: "service/rm2 exposed\n" Aug 12 12:25:59.190: INFO: Service rm2 in namespace e2e-tests-kubectl-h5hzs found. STEP: exposing service Aug 12 12:26:01.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-h5hzs' Aug 12 12:26:01.385: INFO: stderr: "" Aug 12 12:26:01.385: INFO: stdout: "service/rm3 exposed\n" Aug 12 12:26:01.388: INFO: Service rm3 in namespace e2e-tests-kubectl-h5hzs found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:26:03.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h5hzs" for this suite. 
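The two kubectl expose invocations above create Service rm2 from the redis-master replication controller and Service rm3 from rm2 itself, both targeting the Redis container port 6379. A quick, illustrative way to confirm what was created in that namespace:

kubectl get services rm2 rm3 --namespace=e2e-tests-kubectl-h5hzs -o wide
kubectl get endpoints rm2 rm3 --namespace=e2e-tests-kubectl-h5hzs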
Aug 12 12:26:27.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:26:27.469: INFO: namespace: e2e-tests-kubectl-h5hzs, resource: bindings, ignored listing per whitelist Aug 12 12:26:27.482: INFO: namespace e2e-tests-kubectl-h5hzs deletion completed in 24.084037427s • [SLOW TEST:33.377 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:26:27.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-6djc2/secret-test-0b5982ff-dc97-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:26:27.992: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-6djc2" to be "success or failure" Aug 12 12:26:28.038: INFO: Pod "pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 46.331543ms Aug 12 12:26:30.149: INFO: Pod "pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157222334s Aug 12 12:26:32.153: INFO: Pod "pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160946304s Aug 12 12:26:34.157: INFO: Pod "pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165410213s STEP: Saw pod success Aug 12 12:26:34.157: INFO: Pod "pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:26:34.160: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c container env-test: STEP: delete the pod Aug 12 12:26:34.191: INFO: Waiting for pod pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:26:34.195: INFO: Pod pod-configmaps-0b877079-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:26:34.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6djc2" for this suite. 
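The "consumable via the environment" case above injects a whole Secret into a container's environment. A minimal sketch of that pattern follows, assuming the test uses envFrom with a secretRef; the secret name, key, and pod name are illustrative, not the suite's generated ones.

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]      # data-1=value-1 shows up in the environment
    envFrom:
    - secretRef:
        name: secret-test
EOF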
Aug 12 12:26:40.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:26:40.273: INFO: namespace: e2e-tests-secrets-6djc2, resource: bindings, ignored listing per whitelist Aug 12 12:26:40.281: INFO: namespace e2e-tests-secrets-6djc2 deletion completed in 6.082537386s • [SLOW TEST:12.799 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:26:40.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 12 12:26:40.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-nql89" to be "success or failure" Aug 12 12:26:40.394: INFO: Pod "downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.317223ms Aug 12 12:26:42.706: INFO: Pod "downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322396746s Aug 12 12:26:44.735: INFO: Pod "downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.351279453s STEP: Saw pod success Aug 12 12:26:44.735: INFO: Pod "downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:26:44.738: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c container client-container: STEP: delete the pod Aug 12 12:26:44.858: INFO: Waiting for pod downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:26:44.938: INFO: Pod downwardapi-volume-12eae1c5-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:26:44.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nql89" for this suite. 
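The "podname only" check above exposes the pod's own name to the container through a downward API volume. A minimal sketch of the mechanism, with illustrative names rather than the suite's generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]   # prints "downward-podname-demo"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF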
Aug 12 12:26:51.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:26:51.100: INFO: namespace: e2e-tests-downward-api-nql89, resource: bindings, ignored listing per whitelist Aug 12 12:26:51.126: INFO: namespace e2e-tests-downward-api-nql89 deletion completed in 6.184347984s • [SLOW TEST:10.845 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:26:51.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 12 12:26:58.585: INFO: Successfully updated pod "annotationupdate199ebf39-dc97-11ea-9b9c-0242ac11000c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:27:00.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hsdf6" for this suite. 
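The "update annotations on modification" case relies on the kubelet refreshing downward API volume files when pod metadata changes. A sketch of that behaviour, assuming a downwardAPI item for metadata.annotations; pod and annotation names are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# Change an annotation; the mounted file is rewritten after the next kubelet sync
kubectl annotate pod annotationupdate-demo --overwrite builder=bob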
Aug 12 12:27:22.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:27:22.737: INFO: namespace: e2e-tests-downward-api-hsdf6, resource: bindings, ignored listing per whitelist Aug 12 12:27:22.793: INFO: namespace e2e-tests-downward-api-hsdf6 deletion completed in 22.157207215s • [SLOW TEST:31.667 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:27:22.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Aug 12 12:27:29.001: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:27:53.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-jjcgl" for this suite. Aug 12 12:27:59.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:27:59.240: INFO: namespace: e2e-tests-namespaces-jjcgl, resource: bindings, ignored listing per whitelist Aug 12 12:27:59.254: INFO: namespace e2e-tests-namespaces-jjcgl deletion completed in 6.111483395s STEP: Destroying namespace "e2e-tests-nsdeletetest-fhbs2" for this suite. Aug 12 12:27:59.256: INFO: Namespace e2e-tests-nsdeletetest-fhbs2 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-rdnwl" for this suite. 
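The Namespaces test above asserts that deleting a namespace cascades to the pods inside it. The equivalent manual check, with illustrative names:

kubectl create namespace nsdeletetest-demo
kubectl run nstest-pod --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --namespace=nsdeletetest-demo
kubectl delete namespace nsdeletetest-demo
# Once the namespace finishes terminating, its pods are gone with it
kubectl get pods --namespace=nsdeletetest-demo   # -> namespaces "nsdeletetest-demo" not found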
Aug 12 12:28:05.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:28:05.322: INFO: namespace: e2e-tests-nsdeletetest-rdnwl, resource: bindings, ignored listing per whitelist Aug 12 12:28:05.383: INFO: namespace e2e-tests-nsdeletetest-rdnwl deletion completed in 6.127402268s • [SLOW TEST:42.590 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:28:05.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-45a78c3b-dc97-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:28:05.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-xsl4h" to be "success or failure" Aug 12 12:28:05.521: INFO: Pod "pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.309122ms Aug 12 12:28:07.525: INFO: Pod "pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009764591s Aug 12 12:28:09.575: INFO: Pod "pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059805681s STEP: Saw pod success Aug 12 12:28:09.575: INFO: Pod "pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:28:09.578: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c container projected-secret-volume-test: STEP: delete the pod Aug 12 12:28:09.734: INFO: Waiting for pod pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:28:09.774: INFO: Pod pod-projected-secrets-45a8df86-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:28:09.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xsl4h" for this suite. 
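The projected-secret-with-mappings case above remaps a secret key onto a custom file path inside a projected volume. A minimal sketch; the secret name, key, and path are illustrative:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /projected-volume/new-path-data-1"]   # prints "value-1"
    volumeMounts:
    - name: projected-secret
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
EOF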
Aug 12 12:28:15.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:28:15.825: INFO: namespace: e2e-tests-projected-xsl4h, resource: bindings, ignored listing per whitelist Aug 12 12:28:15.890: INFO: namespace e2e-tests-projected-xsl4h deletion completed in 6.112303718s • [SLOW TEST:10.506 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:28:15.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Aug 12 12:28:16.002: INFO: Waiting up to 5m0s for pod "client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-containers-7bl4q" to be "success or failure" Aug 12 12:28:16.025: INFO: Pod "client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.90064ms Aug 12 12:28:18.029: INFO: Pod "client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026802148s Aug 12 12:28:20.033: INFO: Pod "client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030889038s STEP: Saw pod success Aug 12 12:28:20.033: INFO: Pod "client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:28:20.036: INFO: Trying to get logs from node hunter-worker pod client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:28:20.070: INFO: Waiting for pod client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:28:20.091: INFO: Pod client-containers-4be76cab-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:28:20.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-7bl4q" for this suite. 
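The Docker Containers case above verifies that a pod spec can override both the image's ENTRYPOINT and its CMD. The mechanism, sketched with an illustrative pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]              # overrides the image ENTRYPOINT
    args: ["override", "arguments"]     # overrides the image CMD
EOF
kubectl logs client-containers-demo     # once the pod completes: "override arguments"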
Aug 12 12:28:28.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:28:28.126: INFO: namespace: e2e-tests-containers-7bl4q, resource: bindings, ignored listing per whitelist Aug 12 12:28:28.220: INFO: namespace e2e-tests-containers-7bl4q deletion completed in 8.125444731s • [SLOW TEST:12.331 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:28:28.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 12 12:28:28.345: INFO: Waiting up to 5m0s for pod "pod-53448f15-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-bzwfv" to be "success or failure" Aug 12 12:28:28.365: INFO: Pod "pod-53448f15-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.342412ms Aug 12 12:28:30.469: INFO: Pod "pod-53448f15-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123321976s Aug 12 12:28:32.508: INFO: Pod "pod-53448f15-dc97-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.162481941s Aug 12 12:28:34.511: INFO: Pod "pod-53448f15-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165275242s STEP: Saw pod success Aug 12 12:28:34.511: INFO: Pod "pod-53448f15-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:28:34.513: INFO: Trying to get logs from node hunter-worker pod pod-53448f15-dc97-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:28:34.638: INFO: Waiting for pod pod-53448f15-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:28:34.671: INFO: Pod pod-53448f15-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:28:34.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bzwfv" for this suite. 
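The (root,0644,tmpfs) case above exercises an emptyDir backed by memory. The tmpfs-backed volume itself is declared as below; the test image then writes a 0644-mode file into the mount as root. Names and the probe command are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /cache; touch /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory     # tmpfs-backed emptyDir
EOF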
Aug 12 12:28:40.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:28:40.733: INFO: namespace: e2e-tests-emptydir-bzwfv, resource: bindings, ignored listing per whitelist Aug 12 12:28:40.764: INFO: namespace e2e-tests-emptydir-bzwfv deletion completed in 6.089257732s • [SLOW TEST:12.544 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:28:40.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 12 12:28:47.523: INFO: Successfully updated pod "labelsupdate5ac25e1a-dc97-11ea-9b9c-0242ac11000c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:28:49.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5vb9w" for this suite. 
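The projected downwardAPI case mirrors the annotation test earlier, but for labels and through a projected volume source. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    tier: backend
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Update a label; the mounted labels file follows after the kubelet sync period
kubectl label pod labelsupdate-demo --overwrite tier=frontend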
Aug 12 12:29:11.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:29:11.619: INFO: namespace: e2e-tests-projected-5vb9w, resource: bindings, ignored listing per whitelist Aug 12 12:29:11.658: INFO: namespace e2e-tests-projected-5vb9w deletion completed in 22.108100123s • [SLOW TEST:30.894 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:29:11.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6d3baef9-dc97-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume secrets Aug 12 12:29:11.997: INFO: Waiting up to 5m0s for pod "pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-secrets-lp4l7" to be "success or failure" Aug 12 12:29:12.079: INFO: Pod "pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 81.356865ms Aug 12 12:29:14.116: INFO: Pod "pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118847527s Aug 12 12:29:16.120: INFO: Pod "pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122549562s Aug 12 12:29:18.124: INFO: Pod "pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126055239s STEP: Saw pod success Aug 12 12:29:18.124: INFO: Pod "pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:29:18.126: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c container secret-env-test: STEP: delete the pod Aug 12 12:29:18.152: INFO: Waiting for pod pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:29:18.157: INFO: Pod pod-secrets-6d458d41-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:29:18.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lp4l7" for this suite. 
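Unlike the envFrom variant sketched earlier, the "env vars" case maps a single secret key to a named environment variable. Assuming env/valueFrom/secretKeyRef is the mechanism under test; names are illustrative.

kubectl create secret generic secret-test-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]   # prints "value-1"
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-demo
          key: data-1
EOF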
Aug 12 12:29:24.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:29:24.252: INFO: namespace: e2e-tests-secrets-lp4l7, resource: bindings, ignored listing per whitelist Aug 12 12:29:24.268: INFO: namespace e2e-tests-secrets-lp4l7 deletion completed in 6.107964571s • [SLOW TEST:12.609 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:29:24.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n7wsm STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 12 12:29:24.429: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 12 12:29:42.540: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.215:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n7wsm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 12:29:42.540: INFO: >>> kubeConfig: /root/.kube/config I0812 12:29:42.564303 6 log.go:172] (0xc0003b7d90) (0xc002000140) Create stream I0812 12:29:42.564333 6 log.go:172] (0xc0003b7d90) (0xc002000140) Stream added, broadcasting: 1 I0812 12:29:42.565817 6 log.go:172] (0xc0003b7d90) Reply frame received for 1 I0812 12:29:42.565869 6 log.go:172] (0xc0003b7d90) (0xc0020001e0) Create stream I0812 12:29:42.565885 6 log.go:172] (0xc0003b7d90) (0xc0020001e0) Stream added, broadcasting: 3 I0812 12:29:42.566541 6 log.go:172] (0xc0003b7d90) Reply frame received for 3 I0812 12:29:42.566573 6 log.go:172] (0xc0003b7d90) (0xc002000280) Create stream I0812 12:29:42.566582 6 log.go:172] (0xc0003b7d90) (0xc002000280) Stream added, broadcasting: 5 I0812 12:29:42.567277 6 log.go:172] (0xc0003b7d90) Reply frame received for 5 I0812 12:29:42.655918 6 log.go:172] (0xc0003b7d90) Data frame received for 3 I0812 12:29:42.655956 6 log.go:172] (0xc0020001e0) (3) Data frame handling I0812 12:29:42.655981 6 log.go:172] (0xc0020001e0) (3) Data frame sent I0812 12:29:42.656027 6 log.go:172] (0xc0003b7d90) Data frame received for 3 I0812 12:29:42.656065 6 log.go:172] (0xc0020001e0) (3) Data frame handling I0812 12:29:42.656106 6 log.go:172] (0xc0003b7d90) Data frame received for 5 I0812 12:29:42.656151 6 log.go:172] (0xc002000280) (5) Data frame handling I0812 12:29:42.658022 6 log.go:172] (0xc0003b7d90) Data frame received for 1 I0812 12:29:42.658041 
6 log.go:172] (0xc002000140) (1) Data frame handling I0812 12:29:42.658049 6 log.go:172] (0xc002000140) (1) Data frame sent I0812 12:29:42.658071 6 log.go:172] (0xc0003b7d90) (0xc002000140) Stream removed, broadcasting: 1 I0812 12:29:42.658215 6 log.go:172] (0xc0003b7d90) (0xc002000140) Stream removed, broadcasting: 1 I0812 12:29:42.658236 6 log.go:172] (0xc0003b7d90) (0xc0020001e0) Stream removed, broadcasting: 3 I0812 12:29:42.658248 6 log.go:172] (0xc0003b7d90) (0xc002000280) Stream removed, broadcasting: 5 Aug 12 12:29:42.658: INFO: Found all expected endpoints: [netserver-0] I0812 12:29:42.658546 6 log.go:172] (0xc0003b7d90) Go away received Aug 12 12:29:42.665: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.237:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n7wsm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 12 12:29:42.665: INFO: >>> kubeConfig: /root/.kube/config I0812 12:29:42.693990 6 log.go:172] (0xc000aec630) (0xc002692a00) Create stream I0812 12:29:42.694021 6 log.go:172] (0xc000aec630) (0xc002692a00) Stream added, broadcasting: 1 I0812 12:29:42.696122 6 log.go:172] (0xc000aec630) Reply frame received for 1 I0812 12:29:42.696155 6 log.go:172] (0xc000aec630) (0xc002692aa0) Create stream I0812 12:29:42.696164 6 log.go:172] (0xc000aec630) (0xc002692aa0) Stream added, broadcasting: 3 I0812 12:29:42.697294 6 log.go:172] (0xc000aec630) Reply frame received for 3 I0812 12:29:42.697326 6 log.go:172] (0xc000aec630) (0xc001904aa0) Create stream I0812 12:29:42.697336 6 log.go:172] (0xc000aec630) (0xc001904aa0) Stream added, broadcasting: 5 I0812 12:29:42.698128 6 log.go:172] (0xc000aec630) Reply frame received for 5 I0812 12:29:42.769476 6 log.go:172] (0xc000aec630) Data frame received for 3 I0812 12:29:42.769510 6 log.go:172] (0xc002692aa0) (3) Data frame handling I0812 12:29:42.769530 6 log.go:172] (0xc002692aa0) (3) Data frame sent I0812 12:29:42.769541 6 log.go:172] (0xc000aec630) Data frame received for 3 I0812 12:29:42.769552 6 log.go:172] (0xc002692aa0) (3) Data frame handling I0812 12:29:42.769733 6 log.go:172] (0xc000aec630) Data frame received for 5 I0812 12:29:42.769765 6 log.go:172] (0xc001904aa0) (5) Data frame handling I0812 12:29:42.771063 6 log.go:172] (0xc000aec630) Data frame received for 1 I0812 12:29:42.771093 6 log.go:172] (0xc002692a00) (1) Data frame handling I0812 12:29:42.771109 6 log.go:172] (0xc002692a00) (1) Data frame sent I0812 12:29:42.771126 6 log.go:172] (0xc000aec630) (0xc002692a00) Stream removed, broadcasting: 1 I0812 12:29:42.771144 6 log.go:172] (0xc000aec630) Go away received I0812 12:29:42.771249 6 log.go:172] (0xc000aec630) (0xc002692a00) Stream removed, broadcasting: 1 I0812 12:29:42.771275 6 log.go:172] (0xc000aec630) (0xc002692aa0) Stream removed, broadcasting: 3 I0812 12:29:42.771299 6 log.go:172] (0xc000aec630) (0xc001904aa0) Stream removed, broadcasting: 5 Aug 12 12:29:42.771: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:29:42.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-n7wsm" for this suite. 
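The granular networking check above execs curl inside the host-network helper pod and hits each netserver pod's /hostName endpoint directly by pod IP. Done by hand, it is approximately the following; the pod name, container name, namespace, and IP are taken from the log.

kubectl exec host-test-container-pod -c hostexec \
  --namespace=e2e-tests-pod-network-test-n7wsm -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.215:8080/hostName"
# A non-empty hostname in the response means the node-to-pod HTTP path works for that endpoint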
Aug 12 12:30:06.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:30:06.888: INFO: namespace: e2e-tests-pod-network-test-n7wsm, resource: bindings, ignored listing per whitelist Aug 12 12:30:06.904: INFO: namespace e2e-tests-pod-network-test-n7wsm deletion completed in 24.128558788s • [SLOW TEST:42.636 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:30:06.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 12 12:30:07.095: INFO: Waiting up to 5m0s for pod "pod-8e20808e-dc97-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-bt2hl" to be "success or failure" Aug 12 12:30:07.112: INFO: Pod "pod-8e20808e-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.461629ms Aug 12 12:30:09.116: INFO: Pod "pod-8e20808e-dc97-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021162314s Aug 12 12:30:11.120: INFO: Pod "pod-8e20808e-dc97-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025875328s STEP: Saw pod success Aug 12 12:30:11.120: INFO: Pod "pod-8e20808e-dc97-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:30:11.124: INFO: Trying to get logs from node hunter-worker2 pod pod-8e20808e-dc97-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:30:11.159: INFO: Waiting for pod pod-8e20808e-dc97-11ea-9b9c-0242ac11000c to disappear Aug 12 12:30:11.164: INFO: Pod pod-8e20808e-dc97-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:30:11.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bt2hl" for this suite. 
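The (non-root,0777,tmpfs) variant differs from the root case earlier only in that the container writes the 0777 file while running as a non-root UID, which is expressed with a pod securityContext. A minimal, illustrative sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # run the container process as a non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u; touch /cache/f && chmod 0777 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
EOF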
Aug 12 12:30:17.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:30:17.232: INFO: namespace: e2e-tests-emptydir-bt2hl, resource: bindings, ignored listing per whitelist Aug 12 12:30:17.250: INFO: namespace e2e-tests-emptydir-bt2hl deletion completed in 6.083103147s • [SLOW TEST:10.346 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:30:17.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 12 12:30:17.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:20.206: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 12 12:30:20.206: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Aug 12 12:30:20.259: INFO: scanned /root for discovery docs: Aug 12 12:30:20.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:36.257: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 12 12:30:36.257: INFO: stdout: "Created e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37\nScaling up e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Aug 12 12:30:36.257: INFO: stdout: "Created e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37\nScaling up e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Aug 12 12:30:36.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:36.455: INFO: stderr: "" Aug 12 12:30:36.455: INFO: stdout: "e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37-rzz5n e2e-test-nginx-rc-tmwh6 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Aug 12 12:30:41.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:41.570: INFO: stderr: "" Aug 12 12:30:41.570: INFO: stdout: "e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37-rzz5n " Aug 12 12:30:41.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37-rzz5n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:41.724: INFO: stderr: "" Aug 12 12:30:41.725: INFO: stdout: "true" Aug 12 12:30:41.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37-rzz5n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:41.824: INFO: stderr: "" Aug 12 12:30:41.824: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Aug 12 12:30:41.824: INFO: e2e-test-nginx-rc-89a0b3ca75d57ac7083dcee40e028f37-rzz5n is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Aug 12 12:30:41.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-hqjmz' Aug 12 12:30:41.941: INFO: stderr: "" Aug 12 12:30:41.941: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:30:41.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hqjmz" for this suite. 
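The rolling-update flow above uses two commands the tool itself flags as deprecated (run --generator=run/v1 and rolling-update). For reference, the v1.13-era invocation from the log and a rough modern Deployment-based equivalent are sketched below; the Deployment name is illustrative, and the container name "nginx" is assumed to be the one create deployment derives from the image.

# As exercised here (RC-based, deprecated)
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent

# Rough modern equivalent (Deployment-based)
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/e2e-test-nginx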
Aug 12 12:30:49.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:30:50.003: INFO: namespace: e2e-tests-kubectl-hqjmz, resource: bindings, ignored listing per whitelist Aug 12 12:30:50.034: INFO: namespace e2e-tests-kubectl-hqjmz deletion completed in 8.089596753s • [SLOW TEST:32.783 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:30:50.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0812 12:31:20.778950 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 12 12:31:20.779: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:31:20.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-btpqf" for this suite. 
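The orphaning behaviour verified above can be reproduced from the CLI: deleting the Deployment with the Orphan propagation policy leaves its ReplicaSet (and therefore its pods) behind. Names are illustrative, and note the flag spelling differs between kubectl versions.

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
# v1.13-era kubectl: --cascade=false ; newer kubectl: --cascade=orphan
kubectl delete deployment gc-demo --cascade=false
kubectl get rs -l app=gc-demo    # the ReplicaSet created by the Deployment is still present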
Aug 12 12:31:27.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:31:27.100: INFO: namespace: e2e-tests-gc-btpqf, resource: bindings, ignored listing per whitelist Aug 12 12:31:27.125: INFO: namespace e2e-tests-gc-btpqf deletion completed in 6.343387809s • [SLOW TEST:37.091 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:31:27.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:31:27.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Aug 12 12:31:27.323: INFO: stderr: "" Aug 12 12:31:27.323: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-11T21:49:24Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Aug 12 12:31:27.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9xndv' Aug 12 12:31:27.571: INFO: stderr: "" Aug 12 12:31:27.571: INFO: stdout: "replicationcontroller/redis-master created\n" Aug 12 12:31:27.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9xndv' Aug 12 12:31:27.863: INFO: stderr: "" Aug 12 12:31:27.863: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Aug 12 12:31:28.933: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:31:28.933: INFO: Found 0 / 1 Aug 12 12:31:29.867: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:31:29.867: INFO: Found 0 / 1 Aug 12 12:31:30.876: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:31:30.876: INFO: Found 0 / 1 Aug 12 12:31:31.891: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:31:31.891: INFO: Found 1 / 1 Aug 12 12:31:31.891: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 12 12:31:31.894: INFO: Selector matched 1 pods for map[app:redis] Aug 12 12:31:31.894: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 12 12:31:31.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-ngzn8 --namespace=e2e-tests-kubectl-9xndv' Aug 12 12:31:32.035: INFO: stderr: "" Aug 12 12:31:32.035: INFO: stdout: "Name: redis-master-ngzn8\nNamespace: e2e-tests-kubectl-9xndv\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.18.0.2\nStart Time: Wed, 12 Aug 2020 12:31:27 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.241\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://f7617894d18b59f076a2c4e898225a014a6c2a7d1221f2b0b4f5ce9650c0245d\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 12 Aug 2020 12:31:31 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9lrxs (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9lrxs:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9lrxs\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-9xndv/redis-master-ngzn8 to hunter-worker2\n Normal Pulled 3s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Aug 12 12:31:32.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-9xndv' Aug 12 12:31:32.148: INFO: stderr: "" Aug 12 12:31:32.148: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9xndv\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-ngzn8\n" Aug 12 12:31:32.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-9xndv' Aug 12 12:31:32.271: INFO: stderr: "" Aug 12 12:31:32.271: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9xndv\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.109.9.4\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.241:6379\nSession Affinity: None\nEvents: \n" Aug 12 12:31:32.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Aug 12 12:31:32.879: INFO: stderr: "" Aug 12 12:31:32.879: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:22:18 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 12 Aug 2020 12:31:25 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 12 Aug 2020 12:31:25 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 12 Aug 2020 12:31:25 +0000 Fri, 10 Jul 2020 10:22:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 12 Aug 2020 12:31:25 +0000 Fri, 10 Jul 2020 10:23:08 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.8\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 86b921187bcd42a69301f53c2d21b8f0\n System UUID: dbd65bbc-7a27-4b36-b69e-be53f27cba5c\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-46fs4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 33d\n kube-system coredns-54ff9cd656-gzt7d 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 33d\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kindnet-r4bfs 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 33d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kube-proxy-4jv56 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 33d\n local-path-storage local-path-provisioner-674595c7-jw5rw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 33d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Aug 12 12:31:32.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-9xndv' Aug 12 12:31:33.072: INFO: stderr: "" Aug 12 12:31:33.072: INFO: stdout: "Name: e2e-tests-kubectl-9xndv\nLabels: e2e-framework=kubectl\n e2e-run=2864296a-dc89-11ea-9b9c-0242ac11000c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 
12:31:33.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9xndv" for this suite. Aug 12 12:31:55.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:31:55.208: INFO: namespace: e2e-tests-kubectl-9xndv, resource: bindings, ignored listing per whitelist Aug 12 12:31:55.212: INFO: namespace e2e-tests-kubectl-9xndv deletion completed in 22.136623147s • [SLOW TEST:28.087 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:31:55.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:31:59.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-92dhf" for this suite. 
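Judging by its cleanup steps, the EmptyDir wrapper test above mounts a Secret volume and a ConfigMap volume in the same pod and checks that the two wrapper volumes do not interfere. A minimal, illustrative equivalent:

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-config
EOF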
Aug 12 12:32:05.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:32:05.528: INFO: namespace: e2e-tests-emptydir-wrapper-92dhf, resource: bindings, ignored listing per whitelist Aug 12 12:32:05.560: INFO: namespace e2e-tests-emptydir-wrapper-92dhf deletion completed in 6.105483914s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:32:05.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0812 12:32:06.714759 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 12 12:32:06.714: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:32:06.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-j2wfw" for this suite. 
Aug 12 12:32:12.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:32:12.783: INFO: namespace: e2e-tests-gc-j2wfw, resource: bindings, ignored listing per whitelist Aug 12 12:32:12.805: INFO: namespace e2e-tests-gc-j2wfw deletion completed in 6.087367421s • [SLOW TEST:7.245 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:32:12.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 12 12:32:13.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jn4sr' Aug 12 12:32:13.121: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 12 12:32:13.121: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Aug 12 12:32:15.195: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-6lbhk] Aug 12 12:32:15.195: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-6lbhk" in namespace "e2e-tests-kubectl-jn4sr" to be "running and ready" Aug 12 12:32:15.210: INFO: Pod "e2e-test-nginx-rc-6lbhk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.73784ms Aug 12 12:32:17.216: INFO: Pod "e2e-test-nginx-rc-6lbhk": Phase="Running", Reason="", readiness=true. Elapsed: 2.020534079s Aug 12 12:32:17.216: INFO: Pod "e2e-test-nginx-rc-6lbhk" satisfied condition "running and ready" Aug 12 12:32:17.216: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-6lbhk] Aug 12 12:32:17.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jn4sr' Aug 12 12:32:17.349: INFO: stderr: "" Aug 12 12:32:17.349: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Aug 12 12:32:17.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jn4sr' Aug 12 12:32:17.460: INFO: stderr: "" Aug 12 12:32:17.460: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:32:17.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jn4sr" for this suite. Aug 12 12:32:39.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:32:39.574: INFO: namespace: e2e-tests-kubectl-jn4sr, resource: bindings, ignored listing per whitelist Aug 12 12:32:39.596: INFO: namespace e2e-tests-kubectl-jn4sr deletion completed in 22.132155714s • [SLOW TEST:26.790 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:32:39.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Aug 12 12:32:39.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:40.048: INFO: stderr: "" Aug 12 12:32:40.048: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 12 12:32:40.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:40.182: INFO: stderr: "" Aug 12 12:32:40.182: INFO: stdout: "update-demo-nautilus-bzfxp update-demo-nautilus-sl77k " Aug 12 12:32:40.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzfxp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:40.369: INFO: stderr: "" Aug 12 12:32:40.369: INFO: stdout: "" Aug 12 12:32:40.369: INFO: update-demo-nautilus-bzfxp is created but not running Aug 12 12:32:45.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:45.593: INFO: stderr: "" Aug 12 12:32:45.593: INFO: stdout: "update-demo-nautilus-bzfxp update-demo-nautilus-sl77k " Aug 12 12:32:45.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzfxp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:45.883: INFO: stderr: "" Aug 12 12:32:45.883: INFO: stdout: "true" Aug 12 12:32:45.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bzfxp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:46.139: INFO: stderr: "" Aug 12 12:32:46.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 12:32:46.139: INFO: validating pod update-demo-nautilus-bzfxp Aug 12 12:32:46.144: INFO: got data: { "image": "nautilus.jpg" } Aug 12 12:32:46.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 12 12:32:46.144: INFO: update-demo-nautilus-bzfxp is verified up and running Aug 12 12:32:46.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sl77k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:46.242: INFO: stderr: "" Aug 12 12:32:46.242: INFO: stdout: "true" Aug 12 12:32:46.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sl77k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:32:46.348: INFO: stderr: "" Aug 12 12:32:46.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 12 12:32:46.348: INFO: validating pod update-demo-nautilus-sl77k Aug 12 12:32:46.352: INFO: got data: { "image": "nautilus.jpg" } Aug 12 12:32:46.352: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 12 12:32:46.352: INFO: update-demo-nautilus-sl77k is verified up and running STEP: rolling-update to new replication controller Aug 12 12:32:46.354: INFO: scanned /root for discovery docs: Aug 12 12:32:46.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:33:10.860: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 12 12:33:10.860: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 12 12:33:10.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:33:10.967: INFO: stderr: "" Aug 12 12:33:10.967: INFO: stdout: "update-demo-kitten-fmmcl update-demo-kitten-gzl24 " Aug 12 12:33:10.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fmmcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:33:11.055: INFO: stderr: "" Aug 12 12:33:11.055: INFO: stdout: "true" Aug 12 12:33:11.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fmmcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:33:11.161: INFO: stderr: "" Aug 12 12:33:11.161: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 12 12:33:11.161: INFO: validating pod update-demo-kitten-fmmcl Aug 12 12:33:11.192: INFO: got data: { "image": "kitten.jpg" } Aug 12 12:33:11.192: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 12 12:33:11.192: INFO: update-demo-kitten-fmmcl is verified up and running Aug 12 12:33:11.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gzl24 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:33:11.291: INFO: stderr: "" Aug 12 12:33:11.291: INFO: stdout: "true" Aug 12 12:33:11.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gzl24 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2qszp' Aug 12 12:33:11.382: INFO: stderr: "" Aug 12 12:33:11.382: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 12 12:33:11.382: INFO: validating pod update-demo-kitten-gzl24 Aug 12 12:33:11.386: INFO: got data: { "image": "kitten.jpg" } Aug 12 12:33:11.386: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 12 12:33:11.386: INFO: update-demo-kitten-gzl24 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:33:11.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2qszp" for this suite. Aug 12 12:33:33.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:33:33.710: INFO: namespace: e2e-tests-kubectl-2qszp, resource: bindings, ignored listing per whitelist Aug 12 12:33:33.756: INFO: namespace e2e-tests-kubectl-2qszp deletion completed in 22.367199046s • [SLOW TEST:54.160 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:33:33.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-099d1edf-dc98-11ea-9b9c-0242ac11000c STEP: Creating a pod to test consume configMaps Aug 12 12:33:34.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-configmap-c5gf7" to be "success or failure" Aug 12 12:33:34.457: INFO: Pod "pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 136.158753ms Aug 12 12:33:36.719: INFO: Pod "pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398249192s Aug 12 12:33:38.723: INFO: Pod "pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c": Phase="Running", Reason="", readiness=true. Elapsed: 4.402351918s Aug 12 12:33:40.727: INFO: Pod "pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.406537134s STEP: Saw pod success Aug 12 12:33:40.727: INFO: Pod "pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:33:40.730: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c container configmap-volume-test: STEP: delete the pod Aug 12 12:33:40.751: INFO: Waiting for pod pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c to disappear Aug 12 12:33:40.755: INFO: Pod pod-configmaps-099ec2fb-dc98-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:33:40.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-c5gf7" for this suite. Aug 12 12:33:46.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:33:46.954: INFO: namespace: e2e-tests-configmap-c5gf7, resource: bindings, ignored listing per whitelist Aug 12 12:33:47.000: INFO: namespace e2e-tests-configmap-c5gf7 deletion completed in 6.242340487s • [SLOW TEST:13.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:33:47.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 12 12:33:48.487: INFO: Waiting up to 5m0s for pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-qpnv5" to be "success or failure" Aug 12 12:33:48.490: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095041ms Aug 12 12:33:51.067: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.58062029s Aug 12 12:33:53.071: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58460415s Aug 12 12:33:55.075: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588850046s Aug 12 12:33:57.174: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687743114s Aug 12 12:33:59.522: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.035746326s STEP: Saw pod success Aug 12 12:33:59.522: INFO: Pod "pod-12114c51-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure" Aug 12 12:33:59.525: INFO: Trying to get logs from node hunter-worker2 pod pod-12114c51-dc98-11ea-9b9c-0242ac11000c container test-container: STEP: delete the pod Aug 12 12:33:59.551: INFO: Waiting for pod pod-12114c51-dc98-11ea-9b9c-0242ac11000c to disappear Aug 12 12:33:59.586: INFO: Pod pod-12114c51-dc98-11ea-9b9c-0242ac11000c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 12 12:33:59.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qpnv5" for this suite. Aug 12 12:34:06.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 12 12:34:06.096: INFO: namespace: e2e-tests-emptydir-qpnv5, resource: bindings, ignored listing per whitelist Aug 12 12:34:06.138: INFO: namespace e2e-tests-emptydir-qpnv5 deletion completed in 6.548935685s • [SLOW TEST:19.138 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 12 12:34:06.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Aug 12 12:34:06.283: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
alternatives.log
containers/
[the same two-entry listing (alternatives.log, containers/) is repeated for each remaining proxy iteration of this spec; the per-iteration timestamps and latencies, the rest of the proxy test output, and the header of the following [sig-apps] ReplicaSet spec are truncated here]
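For reference, the listing above can be reproduced outside the e2e framework by asking the API server to proxy a request to the kubelet's logs handler on the named node, which is the same path this spec exercises. A minimal sketch, assuming the node name hunter-worker and kubelet port 10250 from this run:

# The response is a directory index; on this cluster it lists alternatives.log and containers/.
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/hunter-worker:10250/proxy/logs/"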
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 12 12:34:12.580: INFO: Creating ReplicaSet my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c
Aug 12 12:34:12.630: INFO: Pod name my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c: Found 0 pods out of 1
Aug 12 12:34:17.634: INFO: Pod name my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c: Found 1 pods out of 1
Aug 12 12:34:17.634: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c" is running
Aug 12 12:34:17.637: INFO: Pod "my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c-cp6kb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 12:34:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 12:34:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 12:34:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-12 12:34:12 +0000 UTC Reason: Message:}])
Aug 12 12:34:17.637: INFO: Trying to dial the pod
Aug 12 12:34:22.818: INFO: Controller my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c: Got expected result from replica 1 [my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c-cp6kb]: "my-hostname-basic-20738451-dc98-11ea-9b9c-0242ac11000c-cp6kb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:34:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-x6pvg" for this suite.
Aug 12 12:34:28.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:34:28.906: INFO: namespace: e2e-tests-replicaset-x6pvg, resource: bindings, ignored listing per whitelist
Aug 12 12:34:28.947: INFO: namespace e2e-tests-replicaset-x6pvg deletion completed in 6.125839815s

• [SLOW TEST:16.524 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
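As a rough companion to the ReplicaSet spec above, the sketch below creates a small ReplicaSet by hand and watches its replicas come up. The name, label, and image (nginx:1.14-alpine, reused from the kubectl tests earlier in this run) are illustrative stand-ins; the conformance test additionally dials each replica to confirm it reports its own hostname.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-hostname-basic-demo
  template:
    metadata:
      labels:
        app: my-hostname-basic-demo
    spec:
      containers:
      - name: web
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF
# Watch the pods reach Running, the manual equivalent of "Ensuring a pod for ReplicaSet ... is running".
kubectl get pods -l app=my-hostname-basic-demo -w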
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:34:28.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 12 12:34:29.115: INFO: Waiting up to 5m0s for pod "pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-bfrqg" to be "success or failure"
Aug 12 12:34:29.134: INFO: Pod "pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.512955ms
Aug 12 12:34:31.162: INFO: Pod "pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047176451s
Aug 12 12:34:33.181: INFO: Pod "pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065850937s
STEP: Saw pod success
Aug 12 12:34:33.181: INFO: Pod "pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:34:33.184: INFO: Trying to get logs from node hunter-worker pod pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c container test-container: 
STEP: delete the pod
Aug 12 12:34:33.202: INFO: Waiting for pod pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:34:33.206: INFO: Pod pod-2a4dc010-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:34:33.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bfrqg" for this suite.
Aug 12 12:34:39.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:34:39.239: INFO: namespace: e2e-tests-emptydir-bfrqg, resource: bindings, ignored listing per whitelist
Aug 12 12:34:39.297: INFO: namespace e2e-tests-emptydir-bfrqg deletion completed in 6.087160799s

• [SLOW TEST:10.349 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
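A hand-rolled equivalent of the emptyDir pod this spec creates is sketched below; the busybox image, file path, and pod name are assumptions (the framework uses its own mount-test image), but the shape is the same: an emptyDir with the default medium, written and checked as root with 0644 permissions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo mount-test > /mnt/volume/data && chmod 0644 /mnt/volume/data && ls -l /mnt/volume/data && cat /mnt/volume/data"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}   # default medium, i.e. node-local storage
EOF
# The pod should end in Succeeded, matching the "success or failure" condition logged above.
kubectl get pod emptydir-0644-demo && kubectl logs emptydir-0644-demo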
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:34:39.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 12 12:34:58.231: INFO: Container started at 2020-08-12 12:34:42 +0000 UTC, pod became ready at 2020-08-12 12:34:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:34:58.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-cjcrq" for this suite.
Aug 12 12:35:20.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:35:20.328: INFO: namespace: e2e-tests-container-probe-cjcrq, resource: bindings, ignored listing per whitelist
Aug 12 12:35:20.330: INFO: namespace e2e-tests-container-probe-cjcrq deletion completed in 22.095351517s

• [SLOW TEST:41.033 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
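The behaviour this spec checks, a pod that is not Ready until its readiness probe's initial delay has elapsed and that never restarts, can be approximated with a plain manifest. The image is the nginx:1.14-alpine tag already used elsewhere in this run; the 15-second delay is an arbitrary illustrative value.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# READY stays 0/1 for roughly the initial delay, then flips to 1/1 with RESTARTS remaining 0,
# much like the ~15 s gap between "Container started" and "pod became ready" logged above.
kubectl get pod readiness-delay-demo -w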
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:35:20.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-48e89ef5-dc98-11ea-9b9c-0242ac11000c
STEP: Creating a pod to test consume configMaps
Aug 12 12:35:20.479: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-qrqp9" to be "success or failure"
Aug 12 12:35:20.483: INFO: Pod "pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.613817ms
Aug 12 12:35:22.488: INFO: Pod "pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009062848s
Aug 12 12:35:24.493: INFO: Pod "pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013749955s
STEP: Saw pod success
Aug 12 12:35:24.493: INFO: Pod "pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:35:24.496: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 12 12:35:24.602: INFO: Waiting for pod pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:35:24.654: INFO: Pod pod-projected-configmaps-48ead30a-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:35:24.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qrqp9" for this suite.
Aug 12 12:35:30.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:35:30.702: INFO: namespace: e2e-tests-projected-qrqp9, resource: bindings, ignored listing per whitelist
Aug 12 12:35:30.752: INFO: namespace e2e-tests-projected-qrqp9 deletion completed in 6.093460162s

• [SLOW TEST:10.422 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
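A minimal sketch of the projected-configMap pod this spec builds is shown below; the configMap name, key, mapped path, and UID 1000 are placeholders, and busybox stands in for the framework's mount-test image. The point is that the key is remapped via items and the resulting file is still readable by a non-root user, since projected files default to mode 0644.

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000   # run the pod as a non-root user
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1   # the key is exposed under a remapped path
EOF
kubectl logs projected-configmap-nonroot-demo   # expected output: value-1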
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:35:30.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-9schp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9schp to expose endpoints map[]
Aug 12 12:35:30.924: INFO: Get endpoints failed (42.288654ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 12 12:35:31.942: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9schp exposes endpoints map[] (1.059938671s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-9schp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9schp to expose endpoints map[pod1:[80]]
Aug 12 12:35:36.300: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9schp exposes endpoints map[pod1:[80]] (4.100132123s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-9schp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9schp to expose endpoints map[pod1:[80] pod2:[80]]
Aug 12 12:35:40.705: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9schp exposes endpoints map[pod1:[80] pod2:[80]] (4.401070717s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-9schp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9schp to expose endpoints map[pod2:[80]]
Aug 12 12:35:41.787: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9schp exposes endpoints map[pod2:[80]] (1.077567102s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-9schp
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9schp to expose endpoints map[]
Aug 12 12:35:43.042: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9schp exposes endpoints map[] (1.250976528s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:35:43.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-9schp" for this suite.
Aug 12 12:35:49.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:35:49.496: INFO: namespace: e2e-tests-services-9schp, resource: bindings, ignored listing per whitelist
Aug 12 12:35:49.519: INFO: namespace e2e-tests-services-9schp deletion completed in 6.319099967s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:18.767 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
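The endpoint bookkeeping verified above can be reproduced by hand: create a selector-based Service first, then add and remove matching pods while watching its Endpoints object. The service name, pod name, and label below are illustrative, with nginx:1.14-alpine reused from earlier in this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-demo
  ports:
  - port: 80
    targetPort: 80
EOF
# With no matching pods the endpoints map is empty, as in "exposes endpoints map[]" above.
kubectl get endpoints endpoint-test2
# Start a matching pod; once it is Ready its IP:80 appears in the endpoints.
kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo
kubectl get endpoints endpoint-test2 -w
# Deleting the pod drains the endpoints again.
kubectl delete pod pod1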
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:35:49.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 12 12:35:49.597: INFO: Waiting up to 5m0s for pod "downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-downward-api-gzhv4" to be "success or failure"
Aug 12 12:35:49.601: INFO: Pod "downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248035ms
Aug 12 12:35:51.605: INFO: Pod "downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008633038s
Aug 12 12:35:53.734: INFO: Pod "downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13694915s
STEP: Saw pod success
Aug 12 12:35:53.734: INFO: Pod "downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:35:53.736: INFO: Trying to get logs from node hunter-worker pod downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug 12 12:35:53.792: INFO: Waiting for pod downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:35:53.813: INFO: Pod downward-api-5a4630e4-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:35:53.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gzhv4" for this suite.
Aug 12 12:35:59.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:35:59.883: INFO: namespace: e2e-tests-downward-api-gzhv4, resource: bindings, ignored listing per whitelist
Aug 12 12:35:59.924: INFO: namespace e2e-tests-downward-api-gzhv4 deletion completed in 6.107885762s

• [SLOW TEST:10.405 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
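What this spec asserts, that limits.cpu and limits.memory resolved through the downward API fall back to the node's allocatable values when the container declares no limits, can be observed with a small pod of this shape; the busybox image and environment variable names are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-default-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # No resources.limits are set, so the downward API falls back to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
# The printed values track the allocatable cpu/memory of whichever node the pod lands on.
kubectl logs downward-default-limits-demo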
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:35:59.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 12 12:36:06.156: INFO: Waiting up to 5m0s for pod "client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-pods-z2q6s" to be "success or failure"
Aug 12 12:36:06.164: INFO: Pod "client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280807ms
Aug 12 12:36:08.375: INFO: Pod "client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218363449s
Aug 12 12:36:10.379: INFO: Pod "client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.222694247s
STEP: Saw pod success
Aug 12 12:36:10.379: INFO: Pod "client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:36:10.382: INFO: Trying to get logs from node hunter-worker pod client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c container env3cont: 
STEP: delete the pod
Aug 12 12:36:10.520: INFO: Waiting for pod client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:36:10.536: INFO: Pod client-envvars-642574d7-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:36:10.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-z2q6s" for this suite.
Aug 12 12:36:58.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:36:58.605: INFO: namespace: e2e-tests-pods-z2q6s, resource: bindings, ignored listing per whitelist
Aug 12 12:36:58.638: INFO: namespace e2e-tests-pods-z2q6s deletion completed in 48.098374242s

• [SLOW TEST:58.713 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
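The service environment variables this spec looks for are injected only into pods created after the service exists in the same namespace. A minimal sketch, where the service name fooservice, its ports, and the pod name are placeholders:

# Create the service first; the kubelet derives FOOSERVICE_* variables from it.
kubectl create service clusterip fooservice --tcp=8765:8080
# A pod started afterwards in the same namespace sees those variables.
kubectl run envvars-demo --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
kubectl logs envvars-demo
# Expected lines include FOOSERVICE_SERVICE_HOST=<cluster IP> and FOOSERVICE_SERVICE_PORT=8765.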
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:36:58.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-83a51f29-dc98-11ea-9b9c-0242ac11000c
STEP: Creating a pod to test consume configMaps
Aug 12 12:36:59.020: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-zrnpt" to be "success or failure"
Aug 12 12:36:59.023: INFO: Pod "pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.593435ms
Aug 12 12:37:01.027: INFO: Pod "pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006894475s
Aug 12 12:37:03.031: INFO: Pod "pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011443722s
STEP: Saw pod success
Aug 12 12:37:03.031: INFO: Pod "pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:37:03.035: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c container projected-configmap-volume-test: 
STEP: delete the pod
Aug 12 12:37:03.104: INFO: Waiting for pod pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:37:03.108: INFO: Pod pod-projected-configmaps-83a70bb0-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:37:03.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zrnpt" for this suite.
Aug 12 12:37:09.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:37:09.193: INFO: namespace: e2e-tests-projected-zrnpt, resource: bindings, ignored listing per whitelist
Aug 12 12:37:09.246: INFO: namespace e2e-tests-projected-zrnpt deletion completed in 6.13502258s

• [SLOW TEST:10.608 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
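This variant differs from the earlier mapped-configMap sketch only in that each projected item carries an explicit file mode. A sketch with placeholder names, setting mode 0400 and reading the file as the container's default (root) user:

kubectl create configmap projected-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a %n' /etc/projected/path/to/data-1 && cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-mode-demo
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400   # per-item mode instead of the 0644 default
EOF
kubectl logs projected-configmap-mode-demo   # expected: "400 ..." followed by value-1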
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:37:09.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5kvdv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 12 12:37:09.329: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 12 12:37:33.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.231:8080/dial?request=hostName&protocol=udp&host=10.244.1.251&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5kvdv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 12 12:37:33.559: INFO: >>> kubeConfig: /root/.kube/config
I0812 12:37:33.593249       6 log.go:172] (0xc00094f550) (0xc0015b5860) Create stream
I0812 12:37:33.593286       6 log.go:172] (0xc00094f550) (0xc0015b5860) Stream added, broadcasting: 1
I0812 12:37:33.595296       6 log.go:172] (0xc00094f550) Reply frame received for 1
I0812 12:37:33.595334       6 log.go:172] (0xc00094f550) (0xc001340c80) Create stream
I0812 12:37:33.595349       6 log.go:172] (0xc00094f550) (0xc001340c80) Stream added, broadcasting: 3
I0812 12:37:33.596478       6 log.go:172] (0xc00094f550) Reply frame received for 3
I0812 12:37:33.596534       6 log.go:172] (0xc00094f550) (0xc0026ba000) Create stream
I0812 12:37:33.596561       6 log.go:172] (0xc00094f550) (0xc0026ba000) Stream added, broadcasting: 5
I0812 12:37:33.597735       6 log.go:172] (0xc00094f550) Reply frame received for 5
I0812 12:37:33.687342       6 log.go:172] (0xc00094f550) Data frame received for 3
I0812 12:37:33.687388       6 log.go:172] (0xc001340c80) (3) Data frame handling
I0812 12:37:33.687409       6 log.go:172] (0xc001340c80) (3) Data frame sent
I0812 12:37:33.687765       6 log.go:172] (0xc00094f550) Data frame received for 3
I0812 12:37:33.687812       6 log.go:172] (0xc001340c80) (3) Data frame handling
I0812 12:37:33.687847       6 log.go:172] (0xc00094f550) Data frame received for 5
I0812 12:37:33.687860       6 log.go:172] (0xc0026ba000) (5) Data frame handling
I0812 12:37:33.689219       6 log.go:172] (0xc00094f550) Data frame received for 1
I0812 12:37:33.689232       6 log.go:172] (0xc0015b5860) (1) Data frame handling
I0812 12:37:33.689242       6 log.go:172] (0xc0015b5860) (1) Data frame sent
I0812 12:37:33.689256       6 log.go:172] (0xc00094f550) (0xc0015b5860) Stream removed, broadcasting: 1
I0812 12:37:33.689291       6 log.go:172] (0xc00094f550) Go away received
I0812 12:37:33.689356       6 log.go:172] (0xc00094f550) (0xc0015b5860) Stream removed, broadcasting: 1
I0812 12:37:33.689380       6 log.go:172] (0xc00094f550) (0xc001340c80) Stream removed, broadcasting: 3
I0812 12:37:33.689391       6 log.go:172] (0xc00094f550) (0xc0026ba000) Stream removed, broadcasting: 5
Aug 12 12:37:33.689: INFO: Waiting for endpoints: map[]
Aug 12 12:37:33.692: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.231:8080/dial?request=hostName&protocol=udp&host=10.244.2.230&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5kvdv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 12 12:37:33.692: INFO: >>> kubeConfig: /root/.kube/config
I0812 12:37:33.721619       6 log.go:172] (0xc00094fad0) (0xc001e38000) Create stream
I0812 12:37:33.721666       6 log.go:172] (0xc00094fad0) (0xc001e38000) Stream added, broadcasting: 1
I0812 12:37:33.723726       6 log.go:172] (0xc00094fad0) Reply frame received for 1
I0812 12:37:33.723762       6 log.go:172] (0xc00094fad0) (0xc001340d20) Create stream
I0812 12:37:33.723779       6 log.go:172] (0xc00094fad0) (0xc001340d20) Stream added, broadcasting: 3
I0812 12:37:33.724456       6 log.go:172] (0xc00094fad0) Reply frame received for 3
I0812 12:37:33.724479       6 log.go:172] (0xc00094fad0) (0xc0026ba0a0) Create stream
I0812 12:37:33.724487       6 log.go:172] (0xc00094fad0) (0xc0026ba0a0) Stream added, broadcasting: 5
I0812 12:37:33.725408       6 log.go:172] (0xc00094fad0) Reply frame received for 5
I0812 12:37:33.801487       6 log.go:172] (0xc00094fad0) Data frame received for 3
I0812 12:37:33.801521       6 log.go:172] (0xc001340d20) (3) Data frame handling
I0812 12:37:33.801540       6 log.go:172] (0xc001340d20) (3) Data frame sent
I0812 12:37:33.801998       6 log.go:172] (0xc00094fad0) Data frame received for 3
I0812 12:37:33.802026       6 log.go:172] (0xc001340d20) (3) Data frame handling
I0812 12:37:33.802132       6 log.go:172] (0xc00094fad0) Data frame received for 5
I0812 12:37:33.802150       6 log.go:172] (0xc0026ba0a0) (5) Data frame handling
I0812 12:37:33.803776       6 log.go:172] (0xc00094fad0) Data frame received for 1
I0812 12:37:33.803789       6 log.go:172] (0xc001e38000) (1) Data frame handling
I0812 12:37:33.803794       6 log.go:172] (0xc001e38000) (1) Data frame sent
I0812 12:37:33.803801       6 log.go:172] (0xc00094fad0) (0xc001e38000) Stream removed, broadcasting: 1
I0812 12:37:33.803899       6 log.go:172] (0xc00094fad0) (0xc001e38000) Stream removed, broadcasting: 1
I0812 12:37:33.803937       6 log.go:172] (0xc00094fad0) (0xc001340d20) Stream removed, broadcasting: 3
I0812 12:37:33.803951       6 log.go:172] (0xc00094fad0) (0xc0026ba0a0) Stream removed, broadcasting: 5
I0812 12:37:33.803983       6 log.go:172] (0xc00094fad0) Go away received
Aug 12 12:37:33.803: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:37:33.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-5kvdv" for this suite.
Aug 12 12:37:57.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:37:57.961: INFO: namespace: e2e-tests-pod-network-test-5kvdv, resource: bindings, ignored listing per whitelist
Aug 12 12:37:57.963: INFO: namespace e2e-tests-pod-network-test-5kvdv deletion completed in 24.15469643s

• [SLOW TEST:48.716 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
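For context on the spec above: the intra-pod UDP check works by curling a "dial" endpoint on a host-network test pod, which then probes the target pod over UDP and reports back, as the ExecWithOptions line shows. The sketch below is a minimal, illustrative Go reconstruction of that request pattern only; the pod IPs and ports are placeholders copied from this log, not values from a live cluster.

```go
// Illustrative sketch of the /dial request the test's host-test-container
// issues via curl. Not part of the captured log; IPs/ports are placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("request", "hostName") // ask the target pod to report its hostname
	q.Set("protocol", "udp")     // probe over UDP, matching this spec
	q.Set("host", "10.244.2.230")
	q.Set("port", "8081")
	q.Set("tries", "1")

	u := url.URL{Scheme: "http", Host: "10.244.2.231:8080", Path: "/dial", RawQuery: q.Encode()}

	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// On success the response contains the target pod's hostname, which the
	// test compares against the expected endpoints (the "Waiting for
	// endpoints: map[]" lines above show that set draining to empty).
	fmt.Println(string(body))
}
```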
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:37:57.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:38:04.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-b2zcz" for this suite.
Aug 12 12:38:10.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:38:10.428: INFO: namespace: e2e-tests-namespaces-b2zcz, resource: bindings, ignored listing per whitelist
Aug 12 12:38:10.481: INFO: namespace e2e-tests-namespaces-b2zcz deletion completed in 6.079168744s
STEP: Destroying namespace "e2e-tests-nsdeletetest-bv9gk" for this suite.
Aug 12 12:38:10.483: INFO: Namespace e2e-tests-nsdeletetest-bv9gk was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-74tks" for this suite.
Aug 12 12:38:16.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:38:16.509: INFO: namespace: e2e-tests-nsdeletetest-74tks, resource: bindings, ignored listing per whitelist
Aug 12 12:38:16.548: INFO: namespace e2e-tests-nsdeletetest-74tks deletion completed in 6.064616226s

• [SLOW TEST:18.585 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
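What the Namespaces spec above asserts is that services created inside a namespace are removed together with it. The following is a hedged Go sketch of that flow using client-go, assuming a recent client-go with context-aware method signatures and a reachable kubeconfig; the namespace and service names are illustrative, not the generated e2e names.

```go
// Sketch: create a namespace and a service in it, then delete the namespace;
// the service must be garbage-collected along with the namespace.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create a throwaway namespace and a service inside it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdelete-demo"}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service", Namespace: ns.Name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	if _, err := cs.CoreV1().Services(ns.Name).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Deleting the namespace removes everything in it, including the service.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Namespace deletion is asynchronous; once it completes, recreating the
	// namespace and listing services should come back empty, which is what
	// the spec verifies ("Verifying there is no service in the namespace").
	fmt.Println("namespace deletion requested")
}
```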
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:38:16.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Aug 12 12:38:16.635: INFO: Waiting up to 5m0s for pod "var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-var-expansion-tc6nd" to be "success or failure"
Aug 12 12:38:16.651: INFO: Pod "var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.967413ms
Aug 12 12:38:18.655: INFO: Pod "var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01925863s
Aug 12 12:38:20.681: INFO: Pod "var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045296914s
STEP: Saw pod success
Aug 12 12:38:20.681: INFO: Pod "var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:38:20.683: INFO: Trying to get logs from node hunter-worker pod var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c container dapi-container: 
STEP: delete the pod
Aug 12 12:38:20.696: INFO: Waiting for pod var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:38:20.701: INFO: Pod var-expansion-b1eab0a3-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:38:20.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-tc6nd" for this suite.
Aug 12 12:38:26.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:38:26.756: INFO: namespace: e2e-tests-var-expansion-tc6nd, resource: bindings, ignored listing per whitelist
Aug 12 12:38:26.775: INFO: namespace e2e-tests-var-expansion-tc6nd deletion completed in 6.070216626s

• [SLOW TEST:10.227 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
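The Variable Expansion spec above creates a pod whose container args reference an environment variable with `$(VAR)` syntax; Kubernetes (not the shell) substitutes the value before the command runs. A minimal Go sketch of such a pod follows; the image, names, and values are illustrative assumptions, not the exact manifest the framework generated.

```go
// Sketch of a pod demonstrating $(VAR) expansion in container args.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// $(TEST_VAR) is expanded by Kubernetes before the shell sees it.
				Args: []string{"echo value=$(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // roughly the manifest the test submits, minus generated names
}
```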
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:38:26.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 12 12:38:26.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-bfjst" to be "success or failure"
Aug 12 12:38:26.894: INFO: Pod "downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.461893ms
Aug 12 12:38:29.083: INFO: Pod "downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206255051s
Aug 12 12:38:31.131: INFO: Pod "downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253693089s
Aug 12 12:38:33.400: INFO: Pod "downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.52297228s
STEP: Saw pod success
Aug 12 12:38:33.400: INFO: Pod "downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:38:33.402: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c container client-container: 
STEP: delete the pod
Aug 12 12:38:33.815: INFO: Waiting for pod downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:38:33.888: INFO: Pod downwardapi-volume-b8053be6-dc98-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:38:33.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bfjst" for this suite.
Aug 12 12:38:40.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:38:40.083: INFO: namespace: e2e-tests-projected-bfjst, resource: bindings, ignored listing per whitelist
Aug 12 12:38:40.157: INFO: namespace e2e-tests-projected-bfjst deletion completed in 6.115223333s

• [SLOW TEST:13.382 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
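The Projected downwardAPI spec above sets an explicit per-item file mode and then checks that mode on the mounted file. The sketch below shows the relevant volume shape in Go; the pod name, image, command, and the 0400 mode are illustrative assumptions.

```go
// Sketch of a projected downwardAPI volume with an explicit item mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // owner read-only; the test checks this mode on the mounted file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									Mode:     &mode, // per-item mode under test
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```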
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:38:40.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-92jld
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Aug 12 12:38:40.306: INFO: Found 0 stateful pods, waiting for 3
Aug 12 12:38:50.329: INFO: Found 2 stateful pods, waiting for 3
Aug 12 12:39:00.311: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 12 12:39:00.311: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 12 12:39:00.311: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 12 12:39:00.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-92jld ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 12 12:39:00.604: INFO: stderr: "I0812 12:39:00.451476    3791 log.go:172] (0xc000138840) (0xc000603400) Create stream\nI0812 12:39:00.451533    3791 log.go:172] (0xc000138840) (0xc000603400) Stream added, broadcasting: 1\nI0812 12:39:00.454158    3791 log.go:172] (0xc000138840) Reply frame received for 1\nI0812 12:39:00.454218    3791 log.go:172] (0xc000138840) (0xc0007be000) Create stream\nI0812 12:39:00.454234    3791 log.go:172] (0xc000138840) (0xc0007be000) Stream added, broadcasting: 3\nI0812 12:39:00.455407    3791 log.go:172] (0xc000138840) Reply frame received for 3\nI0812 12:39:00.455451    3791 log.go:172] (0xc000138840) (0xc0006e2000) Create stream\nI0812 12:39:00.455465    3791 log.go:172] (0xc000138840) (0xc0006e2000) Stream added, broadcasting: 5\nI0812 12:39:00.456491    3791 log.go:172] (0xc000138840) Reply frame received for 5\nI0812 12:39:00.593483    3791 log.go:172] (0xc000138840) Data frame received for 5\nI0812 12:39:00.593534    3791 log.go:172] (0xc0006e2000) (5) Data frame handling\nI0812 12:39:00.593584    3791 log.go:172] (0xc000138840) Data frame received for 3\nI0812 12:39:00.593703    3791 log.go:172] (0xc0007be000) (3) Data frame handling\nI0812 12:39:00.593750    3791 log.go:172] (0xc0007be000) (3) Data frame sent\nI0812 12:39:00.593772    3791 log.go:172] (0xc000138840) Data frame received for 3\nI0812 12:39:00.593789    3791 log.go:172] (0xc0007be000) (3) Data frame handling\nI0812 12:39:00.596638    3791 log.go:172] (0xc000138840) Data frame received for 1\nI0812 12:39:00.596671    3791 log.go:172] (0xc000603400) (1) Data frame handling\nI0812 12:39:00.596694    3791 log.go:172] (0xc000603400) (1) Data frame sent\nI0812 12:39:00.596713    3791 log.go:172] (0xc000138840) (0xc000603400) Stream removed, broadcasting: 1\nI0812 12:39:00.596864    3791 log.go:172] (0xc000138840) Go away received\nI0812 12:39:00.597137    3791 log.go:172] (0xc000138840) (0xc000603400) Stream removed, broadcasting: 1\nI0812 12:39:00.597189    3791 log.go:172] (0xc000138840) (0xc0007be000) Stream removed, broadcasting: 3\nI0812 12:39:00.597227    3791 log.go:172] (0xc000138840) (0xc0006e2000) Stream removed, broadcasting: 5\n"
Aug 12 12:39:00.604: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 12 12:39:00.604: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 12 12:39:10.715: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 12 12:39:20.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-92jld ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 12 12:39:20.952: INFO: stderr: "I0812 12:39:20.894093    3814 log.go:172] (0xc00014c790) (0xc0006592c0) Create stream\nI0812 12:39:20.894136    3814 log.go:172] (0xc00014c790) (0xc0006592c0) Stream added, broadcasting: 1\nI0812 12:39:20.896082    3814 log.go:172] (0xc00014c790) Reply frame received for 1\nI0812 12:39:20.896126    3814 log.go:172] (0xc00014c790) (0xc000720000) Create stream\nI0812 12:39:20.896147    3814 log.go:172] (0xc00014c790) (0xc000720000) Stream added, broadcasting: 3\nI0812 12:39:20.896836    3814 log.go:172] (0xc00014c790) Reply frame received for 3\nI0812 12:39:20.896850    3814 log.go:172] (0xc00014c790) (0xc000659360) Create stream\nI0812 12:39:20.896856    3814 log.go:172] (0xc00014c790) (0xc000659360) Stream added, broadcasting: 5\nI0812 12:39:20.897557    3814 log.go:172] (0xc00014c790) Reply frame received for 5\nI0812 12:39:20.944054    3814 log.go:172] (0xc00014c790) Data frame received for 3\nI0812 12:39:20.944079    3814 log.go:172] (0xc000720000) (3) Data frame handling\nI0812 12:39:20.944092    3814 log.go:172] (0xc000720000) (3) Data frame sent\nI0812 12:39:20.944097    3814 log.go:172] (0xc00014c790) Data frame received for 3\nI0812 12:39:20.944101    3814 log.go:172] (0xc000720000) (3) Data frame handling\nI0812 12:39:20.944122    3814 log.go:172] (0xc00014c790) Data frame received for 5\nI0812 12:39:20.944126    3814 log.go:172] (0xc000659360) (5) Data frame handling\nI0812 12:39:20.945674    3814 log.go:172] (0xc00014c790) Data frame received for 1\nI0812 12:39:20.945695    3814 log.go:172] (0xc0006592c0) (1) Data frame handling\nI0812 12:39:20.945710    3814 log.go:172] (0xc0006592c0) (1) Data frame sent\nI0812 12:39:20.945811    3814 log.go:172] (0xc00014c790) (0xc0006592c0) Stream removed, broadcasting: 1\nI0812 12:39:20.945830    3814 log.go:172] (0xc00014c790) Go away received\nI0812 12:39:20.946042    3814 log.go:172] (0xc00014c790) (0xc0006592c0) Stream removed, broadcasting: 1\nI0812 12:39:20.946063    3814 log.go:172] (0xc00014c790) (0xc000720000) Stream removed, broadcasting: 3\nI0812 12:39:20.946079    3814 log.go:172] (0xc00014c790) (0xc000659360) Stream removed, broadcasting: 5\n"
Aug 12 12:39:20.952: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 12 12:39:20.952: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 12 12:39:30.974: INFO: Waiting for StatefulSet e2e-tests-statefulset-92jld/ss2 to complete update
Aug 12 12:39:30.974: INFO: Waiting for Pod e2e-tests-statefulset-92jld/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 12 12:39:30.974: INFO: Waiting for Pod e2e-tests-statefulset-92jld/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 12 12:39:40.981: INFO: Waiting for StatefulSet e2e-tests-statefulset-92jld/ss2 to complete update
Aug 12 12:39:40.981: INFO: Waiting for Pod e2e-tests-statefulset-92jld/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 12 12:39:50.981: INFO: Waiting for StatefulSet e2e-tests-statefulset-92jld/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 12 12:40:00.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-92jld ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 12 12:40:01.256: INFO: stderr: "I0812 12:40:01.106885    3836 log.go:172] (0xc000784160) (0xc0006fa6e0) Create stream\nI0812 12:40:01.106934    3836 log.go:172] (0xc000784160) (0xc0006fa6e0) Stream added, broadcasting: 1\nI0812 12:40:01.109120    3836 log.go:172] (0xc000784160) Reply frame received for 1\nI0812 12:40:01.109161    3836 log.go:172] (0xc000784160) (0xc000690000) Create stream\nI0812 12:40:01.109175    3836 log.go:172] (0xc000784160) (0xc000690000) Stream added, broadcasting: 3\nI0812 12:40:01.109976    3836 log.go:172] (0xc000784160) Reply frame received for 3\nI0812 12:40:01.110006    3836 log.go:172] (0xc000784160) (0xc000690140) Create stream\nI0812 12:40:01.110023    3836 log.go:172] (0xc000784160) (0xc000690140) Stream added, broadcasting: 5\nI0812 12:40:01.110689    3836 log.go:172] (0xc000784160) Reply frame received for 5\nI0812 12:40:01.250400    3836 log.go:172] (0xc000784160) Data frame received for 3\nI0812 12:40:01.250434    3836 log.go:172] (0xc000690000) (3) Data frame handling\nI0812 12:40:01.250461    3836 log.go:172] (0xc000690000) (3) Data frame sent\nI0812 12:40:01.250490    3836 log.go:172] (0xc000784160) Data frame received for 3\nI0812 12:40:01.250515    3836 log.go:172] (0xc000690000) (3) Data frame handling\nI0812 12:40:01.250763    3836 log.go:172] (0xc000784160) Data frame received for 5\nI0812 12:40:01.250786    3836 log.go:172] (0xc000690140) (5) Data frame handling\nI0812 12:40:01.251958    3836 log.go:172] (0xc000784160) Data frame received for 1\nI0812 12:40:01.251980    3836 log.go:172] (0xc0006fa6e0) (1) Data frame handling\nI0812 12:40:01.251993    3836 log.go:172] (0xc0006fa6e0) (1) Data frame sent\nI0812 12:40:01.252006    3836 log.go:172] (0xc000784160) (0xc0006fa6e0) Stream removed, broadcasting: 1\nI0812 12:40:01.252023    3836 log.go:172] (0xc000784160) Go away received\nI0812 12:40:01.252135    3836 log.go:172] (0xc000784160) (0xc0006fa6e0) Stream removed, broadcasting: 1\nI0812 12:40:01.252146    3836 log.go:172] (0xc000784160) (0xc000690000) Stream removed, broadcasting: 3\nI0812 12:40:01.252152    3836 log.go:172] (0xc000784160) (0xc000690140) Stream removed, broadcasting: 5\n"
Aug 12 12:40:01.256: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 12 12:40:01.256: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 12 12:40:11.287: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 12 12:40:21.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-92jld ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 12 12:40:21.549: INFO: stderr: "I0812 12:40:21.454053    3859 log.go:172] (0xc0007a42c0) (0xc000706640) Create stream\nI0812 12:40:21.454117    3859 log.go:172] (0xc0007a42c0) (0xc000706640) Stream added, broadcasting: 1\nI0812 12:40:21.455997    3859 log.go:172] (0xc0007a42c0) Reply frame received for 1\nI0812 12:40:21.456047    3859 log.go:172] (0xc0007a42c0) (0xc00023cd20) Create stream\nI0812 12:40:21.456064    3859 log.go:172] (0xc0007a42c0) (0xc00023cd20) Stream added, broadcasting: 3\nI0812 12:40:21.457150    3859 log.go:172] (0xc0007a42c0) Reply frame received for 3\nI0812 12:40:21.457221    3859 log.go:172] (0xc0007a42c0) (0xc0007066e0) Create stream\nI0812 12:40:21.457240    3859 log.go:172] (0xc0007a42c0) (0xc0007066e0) Stream added, broadcasting: 5\nI0812 12:40:21.458106    3859 log.go:172] (0xc0007a42c0) Reply frame received for 5\nI0812 12:40:21.544710    3859 log.go:172] (0xc0007a42c0) Data frame received for 5\nI0812 12:40:21.544880    3859 log.go:172] (0xc0007066e0) (5) Data frame handling\nI0812 12:40:21.544919    3859 log.go:172] (0xc0007a42c0) Data frame received for 3\nI0812 12:40:21.544939    3859 log.go:172] (0xc00023cd20) (3) Data frame handling\nI0812 12:40:21.544963    3859 log.go:172] (0xc00023cd20) (3) Data frame sent\nI0812 12:40:21.544993    3859 log.go:172] (0xc0007a42c0) Data frame received for 3\nI0812 12:40:21.545011    3859 log.go:172] (0xc00023cd20) (3) Data frame handling\nI0812 12:40:21.545746    3859 log.go:172] (0xc0007a42c0) Data frame received for 1\nI0812 12:40:21.545761    3859 log.go:172] (0xc000706640) (1) Data frame handling\nI0812 12:40:21.545768    3859 log.go:172] (0xc000706640) (1) Data frame sent\nI0812 12:40:21.545778    3859 log.go:172] (0xc0007a42c0) (0xc000706640) Stream removed, broadcasting: 1\nI0812 12:40:21.545795    3859 log.go:172] (0xc0007a42c0) Go away received\nI0812 12:40:21.546031    3859 log.go:172] (0xc0007a42c0) (0xc000706640) Stream removed, broadcasting: 1\nI0812 12:40:21.546048    3859 log.go:172] (0xc0007a42c0) (0xc00023cd20) Stream removed, broadcasting: 3\nI0812 12:40:21.546058    3859 log.go:172] (0xc0007a42c0) (0xc0007066e0) Stream removed, broadcasting: 5\n"
Aug 12 12:40:21.549: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 12 12:40:21.549: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 12 12:40:31.584: INFO: Waiting for StatefulSet e2e-tests-statefulset-92jld/ss2 to complete update
Aug 12 12:40:31.584: INFO: Waiting for Pod e2e-tests-statefulset-92jld/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 12 12:40:31.584: INFO: Waiting for Pod e2e-tests-statefulset-92jld/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 12 12:40:41.590: INFO: Waiting for StatefulSet e2e-tests-statefulset-92jld/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 12 12:40:51.592: INFO: Deleting all statefulset in ns e2e-tests-statefulset-92jld
Aug 12 12:40:51.595: INFO: Scaling statefulset ss2 to 0
Aug 12 12:41:11.614: INFO: Waiting for statefulset status.replicas updated to 0
Aug 12 12:41:11.617: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:41:11.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-92jld" for this suite.
Aug 12 12:41:19.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:41:19.711: INFO: namespace: e2e-tests-statefulset-92jld, resource: bindings, ignored listing per whitelist
Aug 12 12:41:19.717: INFO: namespace e2e-tests-statefulset-92jld deletion completed in 8.079698058s

• [SLOW TEST:159.560 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
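The StatefulSet spec above drives a rolling update of the pod template image (nginx:1.14-alpine to nginx:1.15-alpine) and then rolls it back. Below is a rough Go sketch of the StatefulSet shape involved, limited to the fields relevant to the update behaviour; the names and label selector are illustrative assumptions.

```go
// Sketch of a StatefulSet using the RollingUpdate strategy, which walks pods
// in reverse ordinal order (ss2-2, ss2-1, ss2-0), matching the
// "Updating Pods in reverse ordinal order" step in the log above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "ss2"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service created in the BeforeEach step
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine", // bumped to 1.15-alpine, then rolled back
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```

The two revision names alternating in the "Waiting for Pod ... to have revision" lines above (ss2-6c5cd755cd and ss2-7c9b54fd4c) are the controller revisions for the old and new pod templates; the rollback simply swaps which of them is the target update revision.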
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:41:19.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 12 12:41:19.892: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1f1d16f2-dc99-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc0010ac4c2), BlockOwnerDeletion:(*bool)(0xc0010ac4c3)}}
Aug 12 12:41:19.947: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1f1a1ef9-dc99-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc00241bab2), BlockOwnerDeletion:(*bool)(0xc00241bab3)}}
Aug 12 12:41:19.957: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1f1a7ad4-dc99-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001688172), BlockOwnerDeletion:(*bool)(0xc001688173)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:41:24.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mt9fg" for this suite.
Aug 12 12:41:31.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:41:31.080: INFO: namespace: e2e-tests-gc-mt9fg, resource: bindings, ignored listing per whitelist
Aug 12 12:41:31.082: INFO: namespace e2e-tests-gc-mt9fg deletion completed in 6.081570891s

• [SLOW TEST:11.364 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
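The Garbage collector spec above builds three pods whose OwnerReferences form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), as the printed OwnerReference structs show, and verifies that the garbage collector can still delete all of them. The sketch below reconstructs that reference shape in Go; the UIDs are placeholders, not the ones in the log.

```go
// Sketch of the circular owner references exercised by the spec.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownerRef builds a controller owner reference to a Pod, with
// blockOwnerDeletion set, matching the structs printed in the log above.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// pod name -> the pod that owns it, forming a dependency circle.
	owners := map[string]string{"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
	for pod, owner := range owners {
		ref := ownerRef(owner, types.UID("placeholder-uid-"+owner))
		fmt.Printf("%s.OwnerReferences = %+v\n", pod, ref)
	}
}
```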
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:41:31.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 12 12:41:31.192: INFO: Waiting up to 5m0s for pod "pod-25e1b641-dc99-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-emptydir-m6vdq" to be "success or failure"
Aug 12 12:41:31.215: INFO: Pod "pod-25e1b641-dc99-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.917764ms
Aug 12 12:41:33.218: INFO: Pod "pod-25e1b641-dc99-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025983416s
Aug 12 12:41:35.222: INFO: Pod "pod-25e1b641-dc99-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029509483s
STEP: Saw pod success
Aug 12 12:41:35.222: INFO: Pod "pod-25e1b641-dc99-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:41:35.224: INFO: Trying to get logs from node hunter-worker pod pod-25e1b641-dc99-11ea-9b9c-0242ac11000c container test-container: 
STEP: delete the pod
Aug 12 12:41:35.246: INFO: Waiting for pod pod-25e1b641-dc99-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:41:35.251: INFO: Pod pod-25e1b641-dc99-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:41:35.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-m6vdq" for this suite.
Aug 12 12:41:41.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:41:41.286: INFO: namespace: e2e-tests-emptydir-m6vdq, resource: bindings, ignored listing per whitelist
Aug 12 12:41:41.326: INFO: namespace e2e-tests-emptydir-m6vdq deletion completed in 6.072382037s

• [SLOW TEST:10.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
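The EmptyDir spec above mounts an emptyDir volume on the default medium (the node's filesystem, as opposed to memory-backed tmpfs) and checks the mount's mode. A minimal Go sketch of such a pod follows; the image and command are illustrative assumptions.

```go
// Sketch of a pod with an emptyDir volume on the default medium.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium unset selects the default medium (node disk),
				// as opposed to corev1.StorageMediumMemory (tmpfs).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```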
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:41:41.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 12 12:41:41.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c" in namespace "e2e-tests-projected-dl546" to be "success or failure"
Aug 12 12:41:41.407: INFO: Pod "downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538943ms
Aug 12 12:41:43.410: INFO: Pod "downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007159533s
Aug 12 12:41:45.413: INFO: Pod "downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010733093s
STEP: Saw pod success
Aug 12 12:41:45.413: INFO: Pod "downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c" satisfied condition "success or failure"
Aug 12 12:41:45.416: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c container client-container: 
STEP: delete the pod
Aug 12 12:41:45.447: INFO: Waiting for pod downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c to disappear
Aug 12 12:41:45.461: INFO: Pod downwardapi-volume-2bf6c9a4-dc99-11ea-9b9c-0242ac11000c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:41:45.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dl546" for this suite.
Aug 12 12:41:51.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:41:51.511: INFO: namespace: e2e-tests-projected-dl546, resource: bindings, ignored listing per whitelist
Aug 12 12:41:51.586: INFO: namespace e2e-tests-projected-dl546 deletion completed in 6.122942755s

• [SLOW TEST:10.260 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
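The Projected downwardAPI spec above exposes the container's own CPU request through a resourceFieldRef item and reads it back from the mounted file. The sketch below shows that volume and container shape in Go; the names, image, and the 250m request are illustrative assumptions.

```go
// Sketch of a projected downwardAPI volume exposing the container's CPU request.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cpu-request-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									// Resolves to this container's requests.cpu value.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```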
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 12 12:41:51.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 12 12:41:56.249: INFO: Successfully updated pod "pod-update-321b740d-dc99-11ea-9b9c-0242ac11000c"
STEP: verifying the updated pod is in kubernetes
Aug 12 12:41:56.303: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 12 12:41:56.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-78hp5" for this suite.
Aug 12 12:42:18.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 12 12:42:18.380: INFO: namespace: e2e-tests-pods-78hp5, resource: bindings, ignored listing per whitelist
Aug 12 12:42:18.408: INFO: namespace e2e-tests-pods-78hp5 deletion completed in 22.103328183s

• [SLOW TEST:26.821 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
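The Pods spec above submits a pod, updates it in place, and verifies the update took effect. A hedged Go sketch of that read-modify-write pattern follows, assuming a recent client-go (context-aware signatures) and a reachable kubeconfig; the namespace, pod name, and label are placeholders, not the generated e2e names.

```go
// Sketch: fetch a pod, mutate a label, and write it back, retrying on
// resourceVersion conflicts so a concurrent update does not fail the operation.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	ns, name := "default", "pod-update-demo" // placeholders

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, getErr := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // the e2e test bumps a label in much the same way
		_, updateErr := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod updated")
}
```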
SSSSSSSSSSSSS
Aug 12 12:42:18.408: INFO: Running AfterSuite actions on all nodes
Aug 12 12:42:18.408: INFO: Running AfterSuite actions on node 1
Aug 12 12:42:18.409: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6914.268 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS