I0209 10:47:14.600979 8 e2e.go:224] Starting e2e run "881ae44a-4b29-11ea-aa78-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581245233 - Will randomize all specs
Will run 201 of 2164 specs

Feb 9 10:47:14.900: INFO: >>> kubeConfig: /root/.kube/config
Feb 9 10:47:14.903: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 9 10:47:14.926: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 9 10:47:14.965: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 9 10:47:14.965: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 9 10:47:14.965: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 9 10:47:14.974: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 9 10:47:14.974: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 9 10:47:14.974: INFO: e2e test version: v1.13.12
Feb 9 10:47:14.976: INFO: kube-apiserver version: v1.13.8
S
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 9 10:47:14.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Feb 9 10:47:15.145: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-j24q8
Feb 9 10:47:27.330: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-j24q8
STEP: checking the pod's current state and verifying that restartCount is present
Feb 9 10:47:27.338: INFO: Initial restart count of pod liveness-http is 0
Feb 9 10:47:49.542: INFO: Restart count of pod e2e-tests-container-probe-j24q8/liveness-http is now 1 (22.203980058s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 9 10:47:49.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j24q8" for this suite.
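[Editor's aside] The probe behaviour exercised above can be reproduced by hand against the same cluster: a pod whose container exposes an HTTP /healthz endpoint that eventually fails, so the kubelet restarts it and restartCount climbs exactly as logged. This is a minimal sketch, not the framework's own spec; the pod name, image, port, and timings are illustrative assumptions.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo          # hypothetical name, not the test's liveness-http pod
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness      # assumed demo image (serves /healthz OK, then 500s)
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# Watch the kubelet restart the container once /healthz starts failing:
kubectl --kubeconfig=/root/.kube/config get pod liveness-http-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'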
Feb 9 10:47:55.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:47:55.950: INFO: namespace: e2e-tests-container-probe-j24q8, resource: bindings, ignored listing per whitelist Feb 9 10:47:55.953: INFO: namespace e2e-tests-container-probe-j24q8 deletion completed in 6.342199312s • [SLOW TEST:40.978 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:47:55.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 9 10:47:56.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lqdt5' Feb 9 10:47:57.936: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 9 10:47:57.936: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 9 10:48:00.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lqdt5' Feb 9 10:48:00.445: INFO: stderr: "" Feb 9 10:48:00.446: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:48:00.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lqdt5" for this suite. 
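[Editor's aside] The kubectl-run case above can be driven manually with the same 1.13-era kubectl: the bare form still selects the deployment/apps.v1 generator, which is why the deprecation warning appears in the captured stderr. A sketch (namespace flag omitted; any scratch namespace will do):

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine        # creates deployment.apps/e2e-test-nginx-deployment, with warning
# The non-deprecated equivalents the warning points to:
kubectl run nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine   # bare pod
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# Cleanup, mirroring the test's AfterEach:
kubectl delete deployment e2e-test-nginx-deployment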
Feb 9 10:48:06.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:48:06.732: INFO: namespace: e2e-tests-kubectl-lqdt5, resource: bindings, ignored listing per whitelist Feb 9 10:48:06.798: INFO: namespace e2e-tests-kubectl-lqdt5 deletion completed in 6.334379398s • [SLOW TEST:10.844 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:48:06.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-22w9s/configmap-test-a7e287a1-4b29-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 10:48:07.109: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-22w9s" to be "success or failure" Feb 9 10:48:07.179: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.059073ms Feb 9 10:48:09.283: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173593063s Feb 9 10:48:11.302: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192641798s Feb 9 10:48:13.497: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387571759s Feb 9 10:48:15.885: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775600209s Feb 9 10:48:17.907: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.797510506s Feb 9 10:48:19.923: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.813807332s STEP: Saw pod success Feb 9 10:48:19.923: INFO: Pod "pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 10:48:19.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005 container env-test: STEP: delete the pod Feb 9 10:48:20.228: INFO: Waiting for pod pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005 to disappear Feb 9 10:48:20.250: INFO: Pod pod-configmaps-a7e3f33f-4b29-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:48:20.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-22w9s" for this suite. Feb 9 10:48:30.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:48:30.678: INFO: namespace: e2e-tests-configmap-22w9s, resource: bindings, ignored listing per whitelist Feb 9 10:48:30.678: INFO: namespace e2e-tests-configmap-22w9s deletion completed in 10.417979744s • [SLOW TEST:23.880 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:48:30.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 10:48:30.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-cjchm" to be "success or failure" Feb 9 10:48:30.905: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.850668ms Feb 9 10:48:32.944: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063529465s Feb 9 10:48:34.976: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095206796s Feb 9 10:48:37.042: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161217693s Feb 9 10:48:39.063: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.183048846s Feb 9 10:48:41.095: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214536509s STEP: Saw pod success Feb 9 10:48:41.095: INFO: Pod "downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 10:48:41.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 10:48:41.323: INFO: Waiting for pod downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005 to disappear Feb 9 10:48:41.329: INFO: Pod downwardapi-volume-b60bc7c3-4b29-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:48:41.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cjchm" for this suite. Feb 9 10:48:47.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:48:47.522: INFO: namespace: e2e-tests-projected-cjchm, resource: bindings, ignored listing per whitelist Feb 9 10:48:47.539: INFO: namespace e2e-tests-projected-cjchm deletion completed in 6.20294974s • [SLOW TEST:16.861 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:48:47.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 9 10:48:48.032: INFO: Waiting up to 5m0s for pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-zg6gb" to be "success or failure" Feb 9 10:48:48.042: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.210437ms Feb 9 10:48:50.055: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022846821s Feb 9 10:48:52.070: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037369234s Feb 9 10:48:54.100: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067492526s Feb 9 10:48:56.114: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.081615771s Feb 9 10:48:58.162: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129885787s STEP: Saw pod success Feb 9 10:48:58.163: INFO: Pod "downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 10:48:58.170: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005 container dapi-container: STEP: delete the pod Feb 9 10:48:58.400: INFO: Waiting for pod downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005 to disappear Feb 9 10:48:58.452: INFO: Pod downward-api-c04c1a52-4b29-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:48:58.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zg6gb" for this suite. Feb 9 10:49:04.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:49:05.016: INFO: namespace: e2e-tests-downward-api-zg6gb, resource: bindings, ignored listing per whitelist Feb 9 10:49:05.061: INFO: namespace e2e-tests-downward-api-zg6gb deletion completed in 6.495431509s • [SLOW TEST:17.521 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:49:05.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Feb 9 10:49:05.376: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-s8d9z" to be "success or failure" Feb 9 10:49:05.385: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589946ms Feb 9 10:49:07.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023855752s Feb 9 10:49:09.416: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039242004s Feb 9 10:49:11.556: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179089861s Feb 9 10:49:13.567: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190924626s Feb 9 10:49:15.680: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.303489126s Feb 9 10:49:18.288: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.911972225s Feb 9 10:49:20.310: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.933335084s STEP: Saw pod success Feb 9 10:49:20.310: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 9 10:49:20.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 9 10:49:20.977: INFO: Waiting for pod pod-host-path-test to disappear Feb 9 10:49:21.003: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:49:21.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-s8d9z" for this suite. Feb 9 10:49:27.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:49:27.281: INFO: namespace: e2e-tests-hostpath-s8d9z, resource: bindings, ignored listing per whitelist Feb 9 10:49:27.581: INFO: namespace e2e-tests-hostpath-s8d9z deletion completed in 6.56495547s • [SLOW TEST:22.520 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:49:27.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 9 10:49:29.139: INFO: Pod name wrapped-volume-race-d890b2af-4b29-11ea-aa78-0242ac110005: Found 0 pods out of 5 Feb 9 10:49:34.167: INFO: Pod name wrapped-volume-race-d890b2af-4b29-11ea-aa78-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d890b2af-4b29-11ea-aa78-0242ac110005 in namespace e2e-tests-emptydir-wrapper-8wqsk, will wait for the garbage collector to delete the pods Feb 9 10:51:50.346: INFO: Deleting ReplicationController wrapped-volume-race-d890b2af-4b29-11ea-aa78-0242ac110005 took: 40.945659ms Feb 9 10:51:51.146: INFO: Terminating ReplicationController wrapped-volume-race-d890b2af-4b29-11ea-aa78-0242ac110005 pods took: 800.848544ms STEP: Creating RC which spawns configmap-volume pods Feb 9 10:52:43.646: INFO: Pod name wrapped-volume-race-4caf18c8-4b2a-11ea-aa78-0242ac110005: Found 0 pods out of 5 Feb 9 10:52:48.678: INFO: Pod name wrapped-volume-race-4caf18c8-4b2a-11ea-aa78-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is 
running STEP: deleting ReplicationController wrapped-volume-race-4caf18c8-4b2a-11ea-aa78-0242ac110005 in namespace e2e-tests-emptydir-wrapper-8wqsk, will wait for the garbage collector to delete the pods Feb 9 10:55:03.459: INFO: Deleting ReplicationController wrapped-volume-race-4caf18c8-4b2a-11ea-aa78-0242ac110005 took: 13.066996ms Feb 9 10:55:03.960: INFO: Terminating ReplicationController wrapped-volume-race-4caf18c8-4b2a-11ea-aa78-0242ac110005 pods took: 500.549282ms STEP: Creating RC which spawns configmap-volume pods Feb 9 10:55:54.136: INFO: Pod name wrapped-volume-race-be2dece1-4b2a-11ea-aa78-0242ac110005: Found 0 pods out of 5 Feb 9 10:55:59.180: INFO: Pod name wrapped-volume-race-be2dece1-4b2a-11ea-aa78-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-be2dece1-4b2a-11ea-aa78-0242ac110005 in namespace e2e-tests-emptydir-wrapper-8wqsk, will wait for the garbage collector to delete the pods Feb 9 10:57:43.698: INFO: Deleting ReplicationController wrapped-volume-race-be2dece1-4b2a-11ea-aa78-0242ac110005 took: 101.460796ms Feb 9 10:57:44.000: INFO: Terminating ReplicationController wrapped-volume-race-be2dece1-4b2a-11ea-aa78-0242ac110005 pods took: 301.318565ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:58:35.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8wqsk" for this suite. Feb 9 10:58:45.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:58:45.340: INFO: namespace: e2e-tests-emptydir-wrapper-8wqsk, resource: bindings, ignored listing per whitelist Feb 9 10:58:45.453: INFO: namespace e2e-tests-emptydir-wrapper-8wqsk deletion completed in 10.21758767s • [SLOW TEST:557.871 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:58:45.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 10:58:45.872: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"248453d5-4b2b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001af0c62), BlockOwnerDeletion:(*bool)(0xc001af0c63)}} Feb 9 10:58:45.915: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"247f7f96-4b2b-11ea-a994-fa163e34d433", 
Controller:(*bool)(0xc001b5f0ba), BlockOwnerDeletion:(*bool)(0xc001b5f0bb)}} Feb 9 10:58:46.043: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2480644b-4b2b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001b4a362), BlockOwnerDeletion:(*bool)(0xc001b4a363)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:59:01.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ckfm5" for this suite. Feb 9 10:59:07.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:59:07.981: INFO: namespace: e2e-tests-gc-ckfm5, resource: bindings, ignored listing per whitelist Feb 9 10:59:08.039: INFO: namespace e2e-tests-gc-ckfm5 deletion completed in 6.383175219s • [SLOW TEST:22.586 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:59:08.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 10:59:08.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 9 10:59:08.398: INFO: stderr: "" Feb 9 10:59:08.398: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 10:59:08.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qtrgp" for this suite. 
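[Editor's aside] Referring back to the garbage-collector dependency-circle case above: the circular ownerReferences it records (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) can be rebuilt by hand to see that deletion is not blocked. A sketch under assumptions: a scratch namespace, the same kubectl vintage, and an illustrative pod image; the helper names are made up, not the test's.

for p in pod1 pod2 pod3; do
  kubectl --kubeconfig=/root/.kube/config run "$p" --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
done
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
own() {   # make pod $1 owned by pod $2, with controller/blockOwnerDeletion set as in the log above
  kubectl patch pod "$1" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$(uid "$2")\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
}
own pod1 pod3; own pod2 pod1; own pod3 pod2
# Deleting any member of the circle should still go through; the GC must not deadlock on the cycle:
kubectl delete pod pod1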
Feb 9 10:59:14.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 10:59:14.691: INFO: namespace: e2e-tests-kubectl-qtrgp, resource: bindings, ignored listing per whitelist Feb 9 10:59:14.838: INFO: namespace e2e-tests-kubectl-qtrgp deletion completed in 6.419178266s • [SLOW TEST:6.799 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 10:59:14.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 9 10:59:15.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:17.159: INFO: stderr: "" Feb 9 10:59:17.159: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 9 10:59:17.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:17.439: INFO: stderr: "" Feb 9 10:59:17.439: INFO: stdout: "update-demo-nautilus-4pkwq update-demo-nautilus-drv67 " Feb 9 10:59:17.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:17.607: INFO: stderr: "" Feb 9 10:59:17.607: INFO: stdout: "" Feb 9 10:59:17.607: INFO: update-demo-nautilus-4pkwq is created but not running Feb 9 10:59:22.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:22.787: INFO: stderr: "" Feb 9 10:59:22.787: INFO: stdout: "update-demo-nautilus-4pkwq update-demo-nautilus-drv67 " Feb 9 10:59:22.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pkwq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:22.941: INFO: stderr: "" Feb 9 10:59:22.942: INFO: stdout: "" Feb 9 10:59:22.942: INFO: update-demo-nautilus-4pkwq is created but not running Feb 9 10:59:27.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:28.124: INFO: stderr: "" Feb 9 10:59:28.124: INFO: stdout: "update-demo-nautilus-4pkwq update-demo-nautilus-drv67 " Feb 9 10:59:28.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:28.216: INFO: stderr: "" Feb 9 10:59:28.216: INFO: stdout: "" Feb 9 10:59:28.216: INFO: update-demo-nautilus-4pkwq is created but not running Feb 9 10:59:33.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:33.389: INFO: stderr: "" Feb 9 10:59:33.389: INFO: stdout: "update-demo-nautilus-4pkwq update-demo-nautilus-drv67 " Feb 9 10:59:33.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pkwq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:33.541: INFO: stderr: "" Feb 9 10:59:33.542: INFO: stdout: "true" Feb 9 10:59:33.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4pkwq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:33.747: INFO: stderr: "" Feb 9 10:59:33.747: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 10:59:33.747: INFO: validating pod update-demo-nautilus-4pkwq Feb 9 10:59:33.884: INFO: got data: { "image": "nautilus.jpg" } Feb 9 10:59:33.885: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 10:59:33.885: INFO: update-demo-nautilus-4pkwq is verified up and running Feb 9 10:59:33.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:34.115: INFO: stderr: "" Feb 9 10:59:34.115: INFO: stdout: "true" Feb 9 10:59:34.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:34.232: INFO: stderr: "" Feb 9 10:59:34.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 10:59:34.232: INFO: validating pod update-demo-nautilus-drv67 Feb 9 10:59:34.250: INFO: got data: { "image": "nautilus.jpg" } Feb 9 10:59:34.250: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 10:59:34.250: INFO: update-demo-nautilus-drv67 is verified up and running STEP: scaling down the replication controller Feb 9 10:59:34.253: INFO: scanned /root for discovery docs: Feb 9 10:59:34.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:35.590: INFO: stderr: "" Feb 9 10:59:35.590: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 9 10:59:35.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:35.788: INFO: stderr: "" Feb 9 10:59:35.788: INFO: stdout: "update-demo-nautilus-4pkwq update-demo-nautilus-drv67 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 9 10:59:40.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:40.926: INFO: stderr: "" Feb 9 10:59:40.926: INFO: stdout: "update-demo-nautilus-4pkwq update-demo-nautilus-drv67 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 9 10:59:45.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:46.143: INFO: stderr: "" Feb 9 10:59:46.143: INFO: stdout: "update-demo-nautilus-drv67 " Feb 9 10:59:46.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:46.285: INFO: stderr: "" Feb 9 10:59:46.285: INFO: stdout: "true" Feb 9 10:59:46.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:46.413: INFO: stderr: "" Feb 9 10:59:46.413: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 10:59:46.413: INFO: validating pod update-demo-nautilus-drv67 Feb 9 10:59:46.433: INFO: got data: { "image": "nautilus.jpg" } Feb 9 10:59:46.433: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 9 10:59:46.433: INFO: update-demo-nautilus-drv67 is verified up and running STEP: scaling up the replication controller Feb 9 10:59:46.439: INFO: scanned /root for discovery docs: Feb 9 10:59:46.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:47.798: INFO: stderr: "" Feb 9 10:59:47.798: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 9 10:59:47.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:48.064: INFO: stderr: "" Feb 9 10:59:48.064: INFO: stdout: "update-demo-nautilus-drv67 update-demo-nautilus-q5sqj " Feb 9 10:59:48.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:48.213: INFO: stderr: "" Feb 9 10:59:48.213: INFO: stdout: "true" Feb 9 10:59:48.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:48.358: INFO: stderr: "" Feb 9 10:59:48.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 10:59:48.358: INFO: validating pod update-demo-nautilus-drv67 Feb 9 10:59:48.370: INFO: got data: { "image": "nautilus.jpg" } Feb 9 10:59:48.371: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 10:59:48.371: INFO: update-demo-nautilus-drv67 is verified up and running Feb 9 10:59:48.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5sqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:48.483: INFO: stderr: "" Feb 9 10:59:48.483: INFO: stdout: "" Feb 9 10:59:48.483: INFO: update-demo-nautilus-q5sqj is created but not running Feb 9 10:59:53.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:53.770: INFO: stderr: "" Feb 9 10:59:53.770: INFO: stdout: "update-demo-nautilus-drv67 update-demo-nautilus-q5sqj " Feb 9 10:59:53.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:53.980: INFO: stderr: "" Feb 9 10:59:53.980: INFO: stdout: "true" Feb 9 10:59:53.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:54.177: INFO: stderr: "" Feb 9 10:59:54.178: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 10:59:54.178: INFO: validating pod update-demo-nautilus-drv67 Feb 9 10:59:54.200: INFO: got data: { "image": "nautilus.jpg" } Feb 9 10:59:54.200: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 10:59:54.200: INFO: update-demo-nautilus-drv67 is verified up and running Feb 9 10:59:54.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5sqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:54.320: INFO: stderr: "" Feb 9 10:59:54.320: INFO: stdout: "" Feb 9 10:59:54.320: INFO: update-demo-nautilus-q5sqj is created but not running Feb 9 10:59:59.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:59.599: INFO: stderr: "" Feb 9 10:59:59.599: INFO: stdout: "update-demo-nautilus-drv67 update-demo-nautilus-q5sqj " Feb 9 10:59:59.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 10:59:59.909: INFO: stderr: "" Feb 9 10:59:59.909: INFO: stdout: "true" Feb 9 10:59:59.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drv67 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 11:00:00.084: INFO: stderr: "" Feb 9 11:00:00.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 11:00:00.084: INFO: validating pod update-demo-nautilus-drv67 Feb 9 11:00:00.095: INFO: got data: { "image": "nautilus.jpg" } Feb 9 11:00:00.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 11:00:00.096: INFO: update-demo-nautilus-drv67 is verified up and running Feb 9 11:00:00.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5sqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 11:00:00.197: INFO: stderr: "" Feb 9 11:00:00.197: INFO: stdout: "true" Feb 9 11:00:00.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q5sqj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fhj8' Feb 9 11:00:00.326: INFO: stderr: "" Feb 9 11:00:00.326: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 11:00:00.326: INFO: validating pod update-demo-nautilus-q5sqj Feb 9 11:00:00.336: INFO: got data: { "image": "nautilus.jpg" } Feb 9 11:00:00.336: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 11:00:00.336: INFO: update-demo-nautilus-q5sqj is verified up and running STEP: using delete to clean up resources Feb 9 11:00:00.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7fhj8' Feb 9 11:00:00.535: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 11:00:00.535: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 9 11:00:00.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7fhj8' Feb 9 11:00:00.716: INFO: stderr: "No resources found.\n" Feb 9 11:00:00.716: INFO: stdout: "" Feb 9 11:00:00.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7fhj8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 9 11:00:00.941: INFO: stderr: "" Feb 9 11:00:00.941: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:00:00.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7fhj8" for this suite. 
Feb 9 11:00:25.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:00:25.671: INFO: namespace: e2e-tests-kubectl-7fhj8, resource: bindings, ignored listing per whitelist Feb 9 11:00:25.747: INFO: namespace e2e-tests-kubectl-7fhj8 deletion completed in 24.784355012s • [SLOW TEST:70.908 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:00:25.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-89brn [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-89brn STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-89brn Feb 9 11:00:26.127: INFO: Found 0 stateful pods, waiting for 1 Feb 9 11:00:36.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Feb 9 11:00:46.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 9 11:00:46.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:00:46.964: INFO: stderr: "I0209 11:00:46.450071 858 log.go:172] (0xc0001386e0) (0xc0005c12c0) Create stream\nI0209 11:00:46.450533 858 log.go:172] (0xc0001386e0) (0xc0005c12c0) Stream added, broadcasting: 1\nI0209 11:00:46.464715 858 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0209 11:00:46.464783 858 log.go:172] (0xc0001386e0) (0xc000736000) Create stream\nI0209 11:00:46.464807 858 log.go:172] (0xc0001386e0) (0xc000736000) Stream added, broadcasting: 3\nI0209 11:00:46.468351 858 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0209 11:00:46.468416 858 log.go:172] (0xc0001386e0) (0xc00048e000) Create stream\nI0209 11:00:46.468437 858 log.go:172] (0xc0001386e0) (0xc00048e000) Stream added, 
broadcasting: 5\nI0209 11:00:46.472159 858 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0209 11:00:46.771478 858 log.go:172] (0xc0001386e0) Data frame received for 3\nI0209 11:00:46.771666 858 log.go:172] (0xc000736000) (3) Data frame handling\nI0209 11:00:46.771712 858 log.go:172] (0xc000736000) (3) Data frame sent\nI0209 11:00:46.950912 858 log.go:172] (0xc0001386e0) Data frame received for 1\nI0209 11:00:46.951132 858 log.go:172] (0xc0001386e0) (0xc00048e000) Stream removed, broadcasting: 5\nI0209 11:00:46.951173 858 log.go:172] (0xc0005c12c0) (1) Data frame handling\nI0209 11:00:46.951188 858 log.go:172] (0xc0005c12c0) (1) Data frame sent\nI0209 11:00:46.951206 858 log.go:172] (0xc0001386e0) (0xc000736000) Stream removed, broadcasting: 3\nI0209 11:00:46.951317 858 log.go:172] (0xc0001386e0) (0xc0005c12c0) Stream removed, broadcasting: 1\nI0209 11:00:46.951357 858 log.go:172] (0xc0001386e0) Go away received\nI0209 11:00:46.951892 858 log.go:172] (0xc0001386e0) (0xc0005c12c0) Stream removed, broadcasting: 1\nI0209 11:00:46.951920 858 log.go:172] (0xc0001386e0) (0xc000736000) Stream removed, broadcasting: 3\nI0209 11:00:46.951928 858 log.go:172] (0xc0001386e0) (0xc00048e000) Stream removed, broadcasting: 5\n" Feb 9 11:00:46.964: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:00:46.964: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:00:46.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 9 11:00:57.005: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:00:57.005: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:00:57.090: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999279s Feb 9 11:00:58.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.96028589s Feb 9 11:00:59.138: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.941894369s Feb 9 11:01:00.156: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.91138624s Feb 9 11:01:01.175: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.89429281s Feb 9 11:01:02.186: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.875455807s Feb 9 11:01:03.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.863917096s Feb 9 11:01:04.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.845479394s Feb 9 11:01:05.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.680633567s Feb 9 11:01:06.967: INFO: Verifying statefulset ss doesn't scale past 1 for another 96.906516ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-89brn Feb 9 11:01:07.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:01:08.922: INFO: stderr: "I0209 11:01:08.244406 880 log.go:172] (0xc000708370) (0xc000728640) Create stream\nI0209 11:01:08.244954 880 log.go:172] (0xc000708370) (0xc000728640) Stream added, broadcasting: 1\nI0209 11:01:08.253093 880 log.go:172] (0xc000708370) Reply frame received for 1\nI0209 11:01:08.253201 880 log.go:172] (0xc000708370) (0xc0007286e0) Create stream\nI0209 11:01:08.253217 880 log.go:172] (0xc000708370) (0xc0007286e0) Stream 
added, broadcasting: 3\nI0209 11:01:08.254750 880 log.go:172] (0xc000708370) Reply frame received for 3\nI0209 11:01:08.254835 880 log.go:172] (0xc000708370) (0xc0005b0be0) Create stream\nI0209 11:01:08.254867 880 log.go:172] (0xc000708370) (0xc0005b0be0) Stream added, broadcasting: 5\nI0209 11:01:08.255953 880 log.go:172] (0xc000708370) Reply frame received for 5\nI0209 11:01:08.534261 880 log.go:172] (0xc000708370) Data frame received for 3\nI0209 11:01:08.534398 880 log.go:172] (0xc0007286e0) (3) Data frame handling\nI0209 11:01:08.534440 880 log.go:172] (0xc0007286e0) (3) Data frame sent\nI0209 11:01:08.897196 880 log.go:172] (0xc000708370) Data frame received for 1\nI0209 11:01:08.897423 880 log.go:172] (0xc000708370) (0xc0007286e0) Stream removed, broadcasting: 3\nI0209 11:01:08.898168 880 log.go:172] (0xc000728640) (1) Data frame handling\nI0209 11:01:08.898235 880 log.go:172] (0xc000728640) (1) Data frame sent\nI0209 11:01:08.898483 880 log.go:172] (0xc000708370) (0xc0005b0be0) Stream removed, broadcasting: 5\nI0209 11:01:08.898578 880 log.go:172] (0xc000708370) (0xc000728640) Stream removed, broadcasting: 1\nI0209 11:01:08.898606 880 log.go:172] (0xc000708370) Go away received\nI0209 11:01:08.899858 880 log.go:172] (0xc000708370) (0xc000728640) Stream removed, broadcasting: 1\nI0209 11:01:08.899895 880 log.go:172] (0xc000708370) (0xc0007286e0) Stream removed, broadcasting: 3\nI0209 11:01:08.899910 880 log.go:172] (0xc000708370) (0xc0005b0be0) Stream removed, broadcasting: 5\n" Feb 9 11:01:08.922: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:01:08.922: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:01:08.955: INFO: Found 2 stateful pods, waiting for 3 Feb 9 11:01:19.010: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:01:19.010: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:01:19.010: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 9 11:01:28.969: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:01:28.969: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:01:28.969: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 9 11:01:28.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:01:29.585: INFO: stderr: "I0209 11:01:29.248469 902 log.go:172] (0xc0001386e0) (0xc0005df220) Create stream\nI0209 11:01:29.248821 902 log.go:172] (0xc0001386e0) (0xc0005df220) Stream added, broadcasting: 1\nI0209 11:01:29.256883 902 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0209 11:01:29.256978 902 log.go:172] (0xc0001386e0) (0xc00073a000) Create stream\nI0209 11:01:29.256994 902 log.go:172] (0xc0001386e0) (0xc00073a000) Stream added, broadcasting: 3\nI0209 11:01:29.258664 902 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0209 11:01:29.258693 902 log.go:172] (0xc0001386e0) (0xc00073a0a0) Create stream\nI0209 11:01:29.258708 902 log.go:172] (0xc0001386e0) (0xc00073a0a0) Stream added, broadcasting: 5\nI0209 
11:01:29.259726 902 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0209 11:01:29.423199 902 log.go:172] (0xc0001386e0) Data frame received for 3\nI0209 11:01:29.423485 902 log.go:172] (0xc00073a000) (3) Data frame handling\nI0209 11:01:29.423512 902 log.go:172] (0xc00073a000) (3) Data frame sent\nI0209 11:01:29.568808 902 log.go:172] (0xc0001386e0) (0xc00073a0a0) Stream removed, broadcasting: 5\nI0209 11:01:29.569210 902 log.go:172] (0xc0001386e0) Data frame received for 1\nI0209 11:01:29.569314 902 log.go:172] (0xc0001386e0) (0xc00073a000) Stream removed, broadcasting: 3\nI0209 11:01:29.569411 902 log.go:172] (0xc0005df220) (1) Data frame handling\nI0209 11:01:29.569443 902 log.go:172] (0xc0005df220) (1) Data frame sent\nI0209 11:01:29.569453 902 log.go:172] (0xc0001386e0) (0xc0005df220) Stream removed, broadcasting: 1\nI0209 11:01:29.569483 902 log.go:172] (0xc0001386e0) Go away received\nI0209 11:01:29.570920 902 log.go:172] (0xc0001386e0) (0xc0005df220) Stream removed, broadcasting: 1\nI0209 11:01:29.571061 902 log.go:172] (0xc0001386e0) (0xc00073a000) Stream removed, broadcasting: 3\nI0209 11:01:29.571075 902 log.go:172] (0xc0001386e0) (0xc00073a0a0) Stream removed, broadcasting: 5\n" Feb 9 11:01:29.586: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:01:29.586: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:01:29.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:01:30.287: INFO: stderr: "I0209 11:01:29.958432 925 log.go:172] (0xc000138790) (0xc0005cd360) Create stream\nI0209 11:01:29.959058 925 log.go:172] (0xc000138790) (0xc0005cd360) Stream added, broadcasting: 1\nI0209 11:01:29.966341 925 log.go:172] (0xc000138790) Reply frame received for 1\nI0209 11:01:29.966423 925 log.go:172] (0xc000138790) (0xc00072a000) Create stream\nI0209 11:01:29.966432 925 log.go:172] (0xc000138790) (0xc00072a000) Stream added, broadcasting: 3\nI0209 11:01:29.967486 925 log.go:172] (0xc000138790) Reply frame received for 3\nI0209 11:01:29.967515 925 log.go:172] (0xc000138790) (0xc0002ae000) Create stream\nI0209 11:01:29.967529 925 log.go:172] (0xc000138790) (0xc0002ae000) Stream added, broadcasting: 5\nI0209 11:01:29.968489 925 log.go:172] (0xc000138790) Reply frame received for 5\nI0209 11:01:30.149030 925 log.go:172] (0xc000138790) Data frame received for 3\nI0209 11:01:30.149107 925 log.go:172] (0xc00072a000) (3) Data frame handling\nI0209 11:01:30.149120 925 log.go:172] (0xc00072a000) (3) Data frame sent\nI0209 11:01:30.276655 925 log.go:172] (0xc000138790) (0xc00072a000) Stream removed, broadcasting: 3\nI0209 11:01:30.276851 925 log.go:172] (0xc000138790) Data frame received for 1\nI0209 11:01:30.276875 925 log.go:172] (0xc0005cd360) (1) Data frame handling\nI0209 11:01:30.276891 925 log.go:172] (0xc0005cd360) (1) Data frame sent\nI0209 11:01:30.276899 925 log.go:172] (0xc000138790) (0xc0005cd360) Stream removed, broadcasting: 1\nI0209 11:01:30.277106 925 log.go:172] (0xc000138790) (0xc0002ae000) Stream removed, broadcasting: 5\nI0209 11:01:30.277435 925 log.go:172] (0xc000138790) Go away received\nI0209 11:01:30.277525 925 log.go:172] (0xc000138790) (0xc0005cd360) Stream removed, broadcasting: 1\nI0209 11:01:30.277617 925 log.go:172] (0xc000138790) (0xc00072a000) Stream removed, broadcasting: 3\nI0209 
11:01:30.277663 925 log.go:172] (0xc000138790) (0xc0002ae000) Stream removed, broadcasting: 5\n" Feb 9 11:01:30.287: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:01:30.287: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:01:30.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:01:31.145: INFO: stderr: "I0209 11:01:30.798202 946 log.go:172] (0xc00014c840) (0xc000613360) Create stream\nI0209 11:01:30.798757 946 log.go:172] (0xc00014c840) (0xc000613360) Stream added, broadcasting: 1\nI0209 11:01:30.817278 946 log.go:172] (0xc00014c840) Reply frame received for 1\nI0209 11:01:30.817355 946 log.go:172] (0xc00014c840) (0xc000736000) Create stream\nI0209 11:01:30.817382 946 log.go:172] (0xc00014c840) (0xc000736000) Stream added, broadcasting: 3\nI0209 11:01:30.819192 946 log.go:172] (0xc00014c840) Reply frame received for 3\nI0209 11:01:30.819256 946 log.go:172] (0xc00014c840) (0xc0006ac000) Create stream\nI0209 11:01:30.819294 946 log.go:172] (0xc00014c840) (0xc0006ac000) Stream added, broadcasting: 5\nI0209 11:01:30.822951 946 log.go:172] (0xc00014c840) Reply frame received for 5\nI0209 11:01:31.029245 946 log.go:172] (0xc00014c840) Data frame received for 3\nI0209 11:01:31.029320 946 log.go:172] (0xc000736000) (3) Data frame handling\nI0209 11:01:31.029350 946 log.go:172] (0xc000736000) (3) Data frame sent\nI0209 11:01:31.134937 946 log.go:172] (0xc00014c840) Data frame received for 1\nI0209 11:01:31.135044 946 log.go:172] (0xc00014c840) (0xc0006ac000) Stream removed, broadcasting: 5\nI0209 11:01:31.135071 946 log.go:172] (0xc000613360) (1) Data frame handling\nI0209 11:01:31.135080 946 log.go:172] (0xc000613360) (1) Data frame sent\nI0209 11:01:31.135117 946 log.go:172] (0xc00014c840) (0xc000736000) Stream removed, broadcasting: 3\nI0209 11:01:31.135137 946 log.go:172] (0xc00014c840) (0xc000613360) Stream removed, broadcasting: 1\nI0209 11:01:31.135146 946 log.go:172] (0xc00014c840) Go away received\nI0209 11:01:31.136090 946 log.go:172] (0xc00014c840) (0xc000613360) Stream removed, broadcasting: 1\nI0209 11:01:31.136118 946 log.go:172] (0xc00014c840) (0xc000736000) Stream removed, broadcasting: 3\nI0209 11:01:31.136124 946 log.go:172] (0xc00014c840) (0xc0006ac000) Stream removed, broadcasting: 5\n" Feb 9 11:01:31.145: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:01:31.145: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:01:31.145: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:01:31.160: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 9 11:01:41.224: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:01:41.224: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:01:41.224: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:01:41.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999562s Feb 9 11:01:42.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.930941774s Feb 9 11:01:44.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 
7.889455789s Feb 9 11:01:45.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.814220393s Feb 9 11:01:46.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.787601973s Feb 9 11:01:47.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.75983679s Feb 9 11:01:48.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.738320004s Feb 9 11:01:49.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.692456645s Feb 9 11:01:50.644: INFO: Verifying statefulset ss doesn't scale past 3 for another 676.255739ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-89brn Feb 9 11:01:51.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:01:52.309: INFO: stderr: "I0209 11:01:51.938519 968 log.go:172] (0xc00070a370) (0xc000730640) Create stream\nI0209 11:01:51.938787 968 log.go:172] (0xc00070a370) (0xc000730640) Stream added, broadcasting: 1\nI0209 11:01:51.945488 968 log.go:172] (0xc00070a370) Reply frame received for 1\nI0209 11:01:51.945599 968 log.go:172] (0xc00070a370) (0xc00065cd20) Create stream\nI0209 11:01:51.945610 968 log.go:172] (0xc00070a370) (0xc00065cd20) Stream added, broadcasting: 3\nI0209 11:01:51.947025 968 log.go:172] (0xc00070a370) Reply frame received for 3\nI0209 11:01:51.947052 968 log.go:172] (0xc00070a370) (0xc0007306e0) Create stream\nI0209 11:01:51.947060 968 log.go:172] (0xc00070a370) (0xc0007306e0) Stream added, broadcasting: 5\nI0209 11:01:51.948346 968 log.go:172] (0xc00070a370) Reply frame received for 5\nI0209 11:01:52.089317 968 log.go:172] (0xc00070a370) Data frame received for 3\nI0209 11:01:52.089437 968 log.go:172] (0xc00065cd20) (3) Data frame handling\nI0209 11:01:52.089459 968 log.go:172] (0xc00065cd20) (3) Data frame sent\nI0209 11:01:52.296166 968 log.go:172] (0xc00070a370) Data frame received for 1\nI0209 11:01:52.296320 968 log.go:172] (0xc000730640) (1) Data frame handling\nI0209 11:01:52.296391 968 log.go:172] (0xc000730640) (1) Data frame sent\nI0209 11:01:52.297310 968 log.go:172] (0xc00070a370) (0xc000730640) Stream removed, broadcasting: 1\nI0209 11:01:52.297515 968 log.go:172] (0xc00070a370) (0xc00065cd20) Stream removed, broadcasting: 3\nI0209 11:01:52.297644 968 log.go:172] (0xc00070a370) (0xc0007306e0) Stream removed, broadcasting: 5\nI0209 11:01:52.297748 968 log.go:172] (0xc00070a370) Go away received\nI0209 11:01:52.297939 968 log.go:172] (0xc00070a370) (0xc000730640) Stream removed, broadcasting: 1\nI0209 11:01:52.297951 968 log.go:172] (0xc00070a370) (0xc00065cd20) Stream removed, broadcasting: 3\nI0209 11:01:52.297957 968 log.go:172] (0xc00070a370) (0xc0007306e0) Stream removed, broadcasting: 5\n" Feb 9 11:01:52.310: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:01:52.310: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:01:52.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:01:53.041: INFO: stderr: "I0209 11:01:52.474174 990 log.go:172] (0xc000716370) (0xc0005c52c0) Create stream\nI0209 11:01:52.474801 990 log.go:172] (0xc000716370) (0xc0005c52c0) Stream added, 
broadcasting: 1\nI0209 11:01:52.484260 990 log.go:172] (0xc000716370) Reply frame received for 1\nI0209 11:01:52.484395 990 log.go:172] (0xc000716370) (0xc000626000) Create stream\nI0209 11:01:52.484404 990 log.go:172] (0xc000716370) (0xc000626000) Stream added, broadcasting: 3\nI0209 11:01:52.487584 990 log.go:172] (0xc000716370) Reply frame received for 3\nI0209 11:01:52.487616 990 log.go:172] (0xc000716370) (0xc0001a4000) Create stream\nI0209 11:01:52.487628 990 log.go:172] (0xc000716370) (0xc0001a4000) Stream added, broadcasting: 5\nI0209 11:01:52.489810 990 log.go:172] (0xc000716370) Reply frame received for 5\nI0209 11:01:52.856343 990 log.go:172] (0xc000716370) Data frame received for 3\nI0209 11:01:52.856453 990 log.go:172] (0xc000626000) (3) Data frame handling\nI0209 11:01:52.856469 990 log.go:172] (0xc000626000) (3) Data frame sent\nI0209 11:01:53.027941 990 log.go:172] (0xc000716370) Data frame received for 1\nI0209 11:01:53.028109 990 log.go:172] (0xc000716370) (0xc000626000) Stream removed, broadcasting: 3\nI0209 11:01:53.028154 990 log.go:172] (0xc0005c52c0) (1) Data frame handling\nI0209 11:01:53.028176 990 log.go:172] (0xc0005c52c0) (1) Data frame sent\nI0209 11:01:53.028229 990 log.go:172] (0xc000716370) (0xc0001a4000) Stream removed, broadcasting: 5\nI0209 11:01:53.028252 990 log.go:172] (0xc000716370) (0xc0005c52c0) Stream removed, broadcasting: 1\nI0209 11:01:53.028271 990 log.go:172] (0xc000716370) Go away received\nI0209 11:01:53.028632 990 log.go:172] (0xc000716370) (0xc0005c52c0) Stream removed, broadcasting: 1\nI0209 11:01:53.028700 990 log.go:172] (0xc000716370) (0xc000626000) Stream removed, broadcasting: 3\nI0209 11:01:53.028713 990 log.go:172] (0xc000716370) (0xc0001a4000) Stream removed, broadcasting: 5\n" Feb 9 11:01:53.041: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:01:53.041: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:01:53.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-89brn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:01:53.602: INFO: stderr: "I0209 11:01:53.216923 1011 log.go:172] (0xc00072a370) (0xc0007ac640) Create stream\nI0209 11:01:53.217065 1011 log.go:172] (0xc00072a370) (0xc0007ac640) Stream added, broadcasting: 1\nI0209 11:01:53.221687 1011 log.go:172] (0xc00072a370) Reply frame received for 1\nI0209 11:01:53.221712 1011 log.go:172] (0xc00072a370) (0xc000662c80) Create stream\nI0209 11:01:53.221717 1011 log.go:172] (0xc00072a370) (0xc000662c80) Stream added, broadcasting: 3\nI0209 11:01:53.222955 1011 log.go:172] (0xc00072a370) Reply frame received for 3\nI0209 11:01:53.222993 1011 log.go:172] (0xc00072a370) (0xc000566000) Create stream\nI0209 11:01:53.223017 1011 log.go:172] (0xc00072a370) (0xc000566000) Stream added, broadcasting: 5\nI0209 11:01:53.223945 1011 log.go:172] (0xc00072a370) Reply frame received for 5\nI0209 11:01:53.403320 1011 log.go:172] (0xc00072a370) Data frame received for 3\nI0209 11:01:53.403387 1011 log.go:172] (0xc000662c80) (3) Data frame handling\nI0209 11:01:53.403414 1011 log.go:172] (0xc000662c80) (3) Data frame sent\nI0209 11:01:53.590487 1011 log.go:172] (0xc00072a370) Data frame received for 1\nI0209 11:01:53.590689 1011 log.go:172] (0xc00072a370) (0xc000566000) Stream removed, broadcasting: 5\nI0209 11:01:53.590754 1011 log.go:172] (0xc0007ac640) (1) Data frame 
handling\nI0209 11:01:53.590794 1011 log.go:172] (0xc0007ac640) (1) Data frame sent\nI0209 11:01:53.590816 1011 log.go:172] (0xc00072a370) (0xc000662c80) Stream removed, broadcasting: 3\nI0209 11:01:53.590920 1011 log.go:172] (0xc00072a370) (0xc0007ac640) Stream removed, broadcasting: 1\nI0209 11:01:53.590947 1011 log.go:172] (0xc00072a370) Go away received\nI0209 11:01:53.591874 1011 log.go:172] (0xc00072a370) (0xc0007ac640) Stream removed, broadcasting: 1\nI0209 11:01:53.591908 1011 log.go:172] (0xc00072a370) (0xc000662c80) Stream removed, broadcasting: 3\nI0209 11:01:53.591925 1011 log.go:172] (0xc00072a370) (0xc000566000) Stream removed, broadcasting: 5\n" Feb 9 11:01:53.602: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:01:53.602: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:01:53.602: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 9 11:02:23.721: INFO: Deleting all statefulset in ns e2e-tests-statefulset-89brn Feb 9 11:02:23.736: INFO: Scaling statefulset ss to 0 Feb 9 11:02:23.758: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:02:23.764: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:02:23.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-89brn" for this suite. Feb 9 11:02:31.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:02:31.953: INFO: namespace: e2e-tests-statefulset-89brn, resource: bindings, ignored listing per whitelist Feb 9 11:02:32.125: INFO: namespace e2e-tests-statefulset-89brn deletion completed in 8.291900848s • [SLOW TEST:126.378 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:02:32.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Feb 9 11:02:32.413: INFO: Waiting up to 
5m0s for pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005" in namespace "e2e-tests-var-expansion-dv7gn" to be "success or failure" Feb 9 11:02:32.425: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.60765ms Feb 9 11:02:34.493: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08003273s Feb 9 11:02:36.529: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116305329s Feb 9 11:02:38.883: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470441808s Feb 9 11:02:40.905: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492006952s Feb 9 11:02:44.029: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.616575686s Feb 9 11:02:46.048: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.634913911s STEP: Saw pod success Feb 9 11:02:46.048: INFO: Pod "var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:02:46.059: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005 container dapi-container: STEP: delete the pod Feb 9 11:02:46.950: INFO: Waiting for pod var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005 to disappear Feb 9 11:02:47.034: INFO: Pod var-expansion-abaaa6fa-4b2b-11ea-aa78-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:02:47.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-dv7gn" for this suite. 
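(For reference: the test above exercises Kubernetes' $(VAR_NAME) expansion, where references in a container's args are replaced from that container's environment before the process starts. A minimal sketch that reproduces the behaviour by hand follows; the pod name, env var, and message are illustrative, not the ones generated by the suite.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c"]
    # The kubelet expands $(MESSAGE) from the env list below, so the shell
    # receives "echo hello from args substitution".
    args: ["echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from args substitution"
EOF
kubectl logs var-expansion-demo      # expected output: hello from args substitution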
Feb 9 11:02:53.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:02:53.160: INFO: namespace: e2e-tests-var-expansion-dv7gn, resource: bindings, ignored listing per whitelist Feb 9 11:02:53.269: INFO: namespace e2e-tests-var-expansion-dv7gn deletion completed in 6.212249183s • [SLOW TEST:21.143 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:02:53.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 9 11:02:53.479: INFO: Waiting up to 5m0s for pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-9hxgk" to be "success or failure" Feb 9 11:02:53.491: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.720136ms Feb 9 11:02:55.523: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043804513s Feb 9 11:02:57.539: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059285034s Feb 9 11:02:59.555: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075512982s Feb 9 11:03:01.832: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352787708s Feb 9 11:03:03.867: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388138763s Feb 9 11:03:05.903: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.423940512s STEP: Saw pod success Feb 9 11:03:05.904: INFO: Pod "pod-b839debb-4b2b-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:03:05.911: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b839debb-4b2b-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:03:06.109: INFO: Waiting for pod pod-b839debb-4b2b-11ea-aa78-0242ac110005 to disappear Feb 9 11:03:06.228: INFO: Pod pod-b839debb-4b2b-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:03:06.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9hxgk" for this suite. 
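(For reference: the (root,0777,default) case above writes into an emptyDir volume on the node's default medium as root and checks the resulting permissions. A rough stand-alone equivalent is sketched below; the pod name is illustrative and busybox stands in for the suite's mounttest image, so it only prints the modes for manual comparison.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mode of the mount point and of a freshly created 0777 file so
    # the values can be compared with what the conformance test asserts.
    command: ["/bin/sh", "-c", "ls -ld /test-volume && touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node disk), not medium: Memory
EOF
kubectl logs emptydir-mode-demo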
Feb 9 11:03:12.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:03:12.418: INFO: namespace: e2e-tests-emptydir-9hxgk, resource: bindings, ignored listing per whitelist Feb 9 11:03:12.465: INFO: namespace e2e-tests-emptydir-9hxgk deletion completed in 6.220222526s • [SLOW TEST:19.196 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:03:12.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:03:23.127: INFO: Waiting up to 5m0s for pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005" in namespace "e2e-tests-pods-dmxh7" to be "success or failure" Feb 9 11:03:23.144: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.557917ms Feb 9 11:03:25.157: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029749338s Feb 9 11:03:27.180: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052284151s Feb 9 11:03:29.192: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064173746s Feb 9 11:03:31.208: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08097807s Feb 9 11:03:33.225: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097880541s STEP: Saw pod success Feb 9 11:03:33.226: INFO: Pod "client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:03:33.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005 container env3cont: STEP: delete the pod Feb 9 11:03:34.112: INFO: Waiting for pod client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005 to disappear Feb 9 11:03:34.319: INFO: Pod client-envvars-c9d4a6c5-4b2b-11ea-aa78-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:03:34.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dmxh7" for this suite. 
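(For reference: the pods test above relies on the kubelet injecting legacy service environment variables, <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT, into containers for services that already exist when the pod starts. A quick manual check along the same lines, with illustrative names:)

kubectl create service clusterip demo-svc --tcp=80:80
# The pod must be created after the service, or the variables will not be present.
kubectl run envvars-demo --generator=run-pod/v1 --restart=Never \
  --image=busybox --command -- /bin/sh -c 'env | grep DEMO_SVC'
kubectl logs envvars-demo
# Expected output includes DEMO_SVC_SERVICE_HOST=<cluster IP> and DEMO_SVC_SERVICE_PORT=80.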
Feb 9 11:04:16.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:04:16.618: INFO: namespace: e2e-tests-pods-dmxh7, resource: bindings, ignored listing per whitelist Feb 9 11:04:16.643: INFO: namespace e2e-tests-pods-dmxh7 deletion completed in 42.255990012s • [SLOW TEST:64.176 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:04:16.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 9 11:04:17.010: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-f7mhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-f7mhz/configmaps/e2e-watch-test-resource-version,UID:e9fa5abe-4b2b-11ea-a994-fa163e34d433,ResourceVersion:21076258,Generation:0,CreationTimestamp:2020-02-09 11:04:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 9 11:04:17.011: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-f7mhz,SelfLink:/api/v1/namespaces/e2e-tests-watch-f7mhz/configmaps/e2e-watch-test-resource-version,UID:e9fa5abe-4b2b-11ea-a994-fa163e34d433,ResourceVersion:21076259,Generation:0,CreationTimestamp:2020-02-09 11:04:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:04:17.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-f7mhz" for this suite. 
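(For reference: the watch test above starts a watch at the resourceVersion returned by the first update, so only the later MODIFIED and DELETED events are delivered. The same semantics can be observed against the raw API through kubectl proxy; the namespace, configmap name, and version number below are placeholders.)

kubectl proxy --port=8001 &
# Grab a resourceVersion to start from (placeholder namespace/configmap).
curl -s http://127.0.0.1:8001/api/v1/namespaces/default/configmaps/demo-cm | grep resourceVersion
# Watch the collection from that version (placeholder number); every change made
# after it is streamed as an ADDED/MODIFIED/DELETED event, matching the
# "Got : MODIFIED" / "Got : DELETED" lines in the log above.
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=21076257&fieldSelector=metadata.name=demo-cm"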
Feb 9 11:04:23.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:04:23.212: INFO: namespace: e2e-tests-watch-f7mhz, resource: bindings, ignored listing per whitelist Feb 9 11:04:23.267: INFO: namespace e2e-tests-watch-f7mhz deletion completed in 6.203015319s • [SLOW TEST:6.625 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:04:23.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:04:23.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-mt2pr" to be "success or failure" Feb 9 11:04:23.548: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.933634ms Feb 9 11:04:25.982: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469340109s Feb 9 11:04:28.001: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488179291s Feb 9 11:04:30.109: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596166492s Feb 9 11:04:32.135: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622371321s Feb 9 11:04:34.194: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.680749883s STEP: Saw pod success Feb 9 11:04:34.194: INFO: Pod "downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:04:34.200: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:04:34.279: INFO: Waiting for pod downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005 to disappear Feb 9 11:04:34.355: INFO: Pod downwardapi-volume-ede36d98-4b2b-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:04:34.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mt2pr" for this suite. Feb 9 11:04:40.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:04:40.632: INFO: namespace: e2e-tests-downward-api-mt2pr, resource: bindings, ignored listing per whitelist Feb 9 11:04:40.724: INFO: namespace e2e-tests-downward-api-mt2pr deletion completed in 6.346696778s • [SLOW TEST:17.456 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:04:40.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:04:41.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-czqjh" to be "success or failure" Feb 9 11:04:41.130: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 92.291746ms Feb 9 11:04:43.315: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277090068s Feb 9 11:04:45.329: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291449603s Feb 9 11:04:47.629: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591069782s Feb 9 11:04:49.647: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.608671767s Feb 9 11:04:51.677: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.638641303s STEP: Saw pod success Feb 9 11:04:51.677: INFO: Pod "downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:04:51.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:04:51.931: INFO: Waiting for pod downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005 to disappear Feb 9 11:04:51.974: INFO: Pod downwardapi-volume-f852a9c4-4b2b-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:04:51.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-czqjh" for this suite. Feb 9 11:04:58.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:04:58.673: INFO: namespace: e2e-tests-downward-api-czqjh, resource: bindings, ignored listing per whitelist Feb 9 11:04:58.696: INFO: namespace e2e-tests-downward-api-czqjh deletion completed in 6.711072698s • [SLOW TEST:17.972 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:04:58.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-030d5fe6-4b2c-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:04:59.043: INFO: Waiting up to 5m0s for pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-86znt" to be "success or failure" Feb 9 11:04:59.088: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.142002ms Feb 9 11:05:01.100: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057142802s Feb 9 11:05:03.110: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066741463s Feb 9 11:05:05.364: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.321341804s Feb 9 11:05:07.377: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333718585s Feb 9 11:05:09.394: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.350936118s STEP: Saw pod success Feb 9 11:05:09.394: INFO: Pod "pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:05:09.398: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 9 11:05:09.547: INFO: Waiting for pod pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005 to disappear Feb 9 11:05:09.577: INFO: Pod pod-configmaps-030e8399-4b2c-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:05:09.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-86znt" for this suite. Feb 9 11:05:17.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:05:17.265: INFO: namespace: e2e-tests-configmap-86znt, resource: bindings, ignored listing per whitelist Feb 9 11:05:17.374: INFO: namespace e2e-tests-configmap-86znt deletion completed in 7.789387752s • [SLOW TEST:18.677 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:05:17.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 9 11:05:17.589: INFO: Waiting up to 5m0s for pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-kwwsv" to be "success or failure" Feb 9 11:05:17.624: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.148873ms Feb 9 11:05:19.640: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049890461s Feb 9 11:05:21.668: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077801952s Feb 9 11:05:24.214: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623825686s Feb 9 11:05:26.228: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.638053003s Feb 9 11:05:28.245: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.655023504s STEP: Saw pod success Feb 9 11:05:28.245: INFO: Pod "pod-0e151c73-4b2c-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:05:28.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0e151c73-4b2c-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:05:29.008: INFO: Waiting for pod pod-0e151c73-4b2c-11ea-aa78-0242ac110005 to disappear Feb 9 11:05:29.044: INFO: Pod pod-0e151c73-4b2c-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:05:29.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kwwsv" for this suite. Feb 9 11:05:35.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:05:35.267: INFO: namespace: e2e-tests-emptydir-kwwsv, resource: bindings, ignored listing per whitelist Feb 9 11:05:35.401: INFO: namespace e2e-tests-emptydir-kwwsv deletion completed in 6.343572672s • [SLOW TEST:18.027 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:05:35.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0209 11:06:18.101947 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
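(For reference: the garbage-collector test above deletes a replication controller with delete options that request orphaning and then confirms its pods are not removed. Roughly the same effect can be produced from the CLI on this kubectl version, where --cascade=false requests orphan propagation; the rc name and label below are illustrative.)

kubectl delete rc demo-rc --cascade=false
# The ReplicationController object goes away, but its pods keep Running and
# merely lose their ownerReference; verify with an (illustrative) label selector:
kubectl get pods -l name=demo-rc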
Feb 9 11:06:18.102: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:06:18.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2w5mc" for this suite. Feb 9 11:06:46.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:06:46.681: INFO: namespace: e2e-tests-gc-2w5mc, resource: bindings, ignored listing per whitelist Feb 9 11:06:46.767: INFO: namespace e2e-tests-gc-2w5mc deletion completed in 28.460809352s • [SLOW TEST:71.365 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:06:46.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-43731eaa-4b2c-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 11:06:47.152: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-xt8db" to be "success or failure" Feb 9 11:06:47.180: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.414675ms Feb 9 11:06:49.455: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.302294827s Feb 9 11:06:51.467: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314781425s Feb 9 11:06:54.030: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877506584s Feb 9 11:06:56.159: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.006691868s Feb 9 11:06:58.209: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.05639142s Feb 9 11:07:00.242: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.090103979s STEP: Saw pod success Feb 9 11:07:00.243: INFO: Pod "pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:07:00.255: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 9 11:07:00.415: INFO: Waiting for pod pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005 to disappear Feb 9 11:07:01.104: INFO: Pod pod-projected-secrets-438073a7-4b2c-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:07:01.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xt8db" for this suite. Feb 9 11:07:07.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:07:08.091: INFO: namespace: e2e-tests-projected-xt8db, resource: bindings, ignored listing per whitelist Feb 9 11:07:08.119: INFO: namespace e2e-tests-projected-xt8db deletion completed in 6.962113254s • [SLOW TEST:21.351 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:07:08.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 9 11:07:08.337: INFO: Waiting up to 5m0s for pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-wsls4" to be "success or failure" Feb 9 11:07:08.347: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.56647ms Feb 9 11:07:10.378: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041302053s Feb 9 11:07:12.400: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063613803s Feb 9 11:07:14.487: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150737872s Feb 9 11:07:17.052: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.715076724s Feb 9 11:07:19.070: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.7329164s STEP: Saw pod success Feb 9 11:07:19.070: INFO: Pod "downward-api-50217289-4b2c-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:07:19.076: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-50217289-4b2c-11ea-aa78-0242ac110005 container dapi-container: STEP: delete the pod Feb 9 11:07:19.726: INFO: Waiting for pod downward-api-50217289-4b2c-11ea-aa78-0242ac110005 to disappear Feb 9 11:07:19.758: INFO: Pod downward-api-50217289-4b2c-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:07:19.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wsls4" for this suite. Feb 9 11:07:25.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:07:26.040: INFO: namespace: e2e-tests-downward-api-wsls4, resource: bindings, ignored listing per whitelist Feb 9 11:07:26.100: INFO: namespace e2e-tests-downward-api-wsls4 deletion completed in 6.331534086s • [SLOW TEST:17.981 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:07:26.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-hb6xz;check="$$(dig +tcp +noall 
+answer +search dns-test-service.e2e-tests-dns-hb6xz A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-hb6xz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hb6xz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 3.247.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.247.3_udp@PTR;check="$$(dig +tcp +noall +answer +search 3.247.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.247.3_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-hb6xz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-hb6xz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hb6xz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 3.247.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.247.3_udp@PTR;check="$$(dig +tcp +noall +answer +search 3.247.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.247.3_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 9 11:07:40.766: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.771: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.776: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-hb6xz from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.784: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-hb6xz from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.790: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.796: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.799: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.811: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.817: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.824: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.829: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.835: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.839: INFO: Unable to read 10.103.247.3_udp@PTR from pod 
e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.843: INFO: Unable to read 10.103.247.3_tcp@PTR from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.847: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.852: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.856: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-hb6xz from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.861: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-hb6xz from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.867: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.872: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.877: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.883: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.887: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.892: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.897: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.902: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could 
not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.906: INFO: Unable to read 10.103.247.3_udp@PTR from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.915: INFO: Unable to read 10.103.247.3_tcp@PTR from pod e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005) Feb 9 11:07:40.915: INFO: Lookups using e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-hb6xz wheezy_tcp@dns-test-service.e2e-tests-dns-hb6xz wheezy_udp@dns-test-service.e2e-tests-dns-hb6xz.svc wheezy_tcp@dns-test-service.e2e-tests-dns-hb6xz.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.103.247.3_udp@PTR 10.103.247.3_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-hb6xz jessie_tcp@dns-test-service.e2e-tests-dns-hb6xz jessie_udp@dns-test-service.e2e-tests-dns-hb6xz.svc jessie_tcp@dns-test-service.e2e-tests-dns-hb6xz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-hb6xz.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.103.247.3_udp@PTR 10.103.247.3_tcp@PTR] Feb 9 11:07:46.053: INFO: DNS probes using e2e-tests-dns-hb6xz/dns-test-5af3032c-4b2c-11ea-aa78-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:07:46.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-hb6xz" for this suite. 
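The dig loops above are the heart of this DNS check: each service name (bare, namespace-qualified, and .svc-qualified), the _http._tcp SRV records, the pod A record, and the PTR record for the service cluster IP 10.103.247.3 are resolved over both UDP (+notcp) and TCP (+tcp), and a non-empty answer section is what gets written as OK under /results. A minimal sketch of the same probes run by hand from a throwaway pod follows; the dnsutils image and the one-off pod are illustrative assumptions, while the names, namespace, and IP are the ones printed above:

    kubectl run dns-debug --rm -it --restart=Never --image=tutum/dnsutils -- sh -c '
      dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz.svc A;              # service A record over UDP
      dig +tcp   +noall +answer +search dns-test-service.e2e-tests-dns-hb6xz.svc A;              # same lookup over TCP
      dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-hb6xz.svc SRV; # SRV record for the http port
      dig +notcp +noall +answer 3.247.103.10.in-addr.arpa. PTR                                   # reverse lookup of 10.103.247.3
    '

The earlier "Unable to read ..." messages are expected while the prober pod has not yet written its first results; the test keeps polling until every entry succeeds, which is what the "DNS probes ... succeeded" line above records.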
Feb 9 11:07:52.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:07:52.762: INFO: namespace: e2e-tests-dns-hb6xz, resource: bindings, ignored listing per whitelist Feb 9 11:07:52.860: INFO: namespace e2e-tests-dns-hb6xz deletion completed in 6.283043838s • [SLOW TEST:26.760 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:07:52.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Feb 9 11:08:03.625: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:08:54.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-xp5dp" for this suite. Feb 9 11:09:00.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:09:00.897: INFO: namespace: e2e-tests-namespaces-xp5dp, resource: bindings, ignored listing per whitelist Feb 9 11:09:00.923: INFO: namespace e2e-tests-namespaces-xp5dp deletion completed in 6.857360337s STEP: Destroying namespace "e2e-tests-nsdeletetest-b6474" for this suite. Feb 9 11:09:00.926: INFO: Namespace e2e-tests-nsdeletetest-b6474 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-sxnpl" for this suite. 
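The namespace test above creates a namespace, runs a pod in it, deletes the namespace, waits for the deletion to complete, recreates the namespace, and verifies that no pods survived. A rough equivalent with plain kubectl looks like this; the namespace and pod names are illustrative placeholders, not the generated e2e-tests-nsdeletetest-* names above:

    kubectl create namespace nsdelete-demo
    kubectl run test-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never --namespace=nsdelete-demo
    kubectl wait --for=condition=Ready pod/test-pod --namespace=nsdelete-demo --timeout=2m
    kubectl delete namespace nsdelete-demo      # by default kubectl delete waits until the namespace and its contents are gone
    kubectl create namespace nsdelete-demo      # recreate it under the same name
    kubectl get pods --namespace=nsdelete-demo  # expected: "No resources found." - the old pod was not carried over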
Feb 9 11:09:06.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:09:07.009: INFO: namespace: e2e-tests-nsdeletetest-sxnpl, resource: bindings, ignored listing per whitelist Feb 9 11:09:07.112: INFO: namespace e2e-tests-nsdeletetest-sxnpl deletion completed in 6.186452845s • [SLOW TEST:74.251 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:09:07.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 9 11:09:07.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-sv884' Feb 9 11:09:07.503: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 9 11:09:07.503: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 9 11:09:11.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-sv884' Feb 9 11:09:11.912: INFO: stderr: "" Feb 9 11:09:11.912: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:09:11.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sv884" for this suite. 
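As the stderr above notes, the --generator=deployment/v1beta1 form of kubectl run is deprecated. A sketch of the replacement flow the warning points to, using kubectl create deployment, is shown below; the namespace here is an illustrative placeholder rather than the generated e2e-tests-kubectl-* one, and the label selector relies on kubectl create deployment applying app=<name> to the pods it creates:

    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=demo
    kubectl get pods --namespace=demo -l app=e2e-test-nginx-deployment   # the pod(s) controlled by the deployment
    kubectl delete deployment e2e-test-nginx-deployment --namespace=demo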
Feb 9 11:09:18.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:09:18.059: INFO: namespace: e2e-tests-kubectl-sv884, resource: bindings, ignored listing per whitelist Feb 9 11:09:18.215: INFO: namespace e2e-tests-kubectl-sv884 deletion completed in 6.254322481s • [SLOW TEST:11.101 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:09:18.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Feb 9 11:09:18.729: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix005923521/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:09:18.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-47hrr" for this suite. 
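The proxy test above starts kubectl proxy on a unix socket and then reads /api/ through it. Roughly the same thing can be done by hand; the socket path and the use of curl are illustrative choices, not part of the test:

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    PROXY_PID=$!
    sleep 1                                                                      # give the proxy a moment to create the socket
    curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/    # prints the APIVersions object
    kill "$PROXY_PID"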
Feb 9 11:09:24.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:09:25.062: INFO: namespace: e2e-tests-kubectl-47hrr, resource: bindings, ignored listing per whitelist Feb 9 11:09:25.095: INFO: namespace e2e-tests-kubectl-47hrr deletion completed in 6.200013039s • [SLOW TEST:6.878 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:09:25.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xn6tp [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-xn6tp STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-xn6tp STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-xn6tp STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-xn6tp Feb 9 11:09:41.469: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xn6tp, name: ss-0, uid: ab11fcf6-4b2c-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 9 11:09:41.482: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xn6tp, name: ss-0, uid: ab11fcf6-4b2c-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 9 11:09:41.503: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-xn6tp STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-xn6tp STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-xn6tp and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 9 11:09:56.805: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xn6tp Feb 9 11:09:56.812: INFO: Scaling statefulset ss to 0 Feb 9 11:10:06.884: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:10:06.892: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:10:06.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xn6tp" for this suite. Feb 9 11:10:15.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:10:15.069: INFO: namespace: e2e-tests-statefulset-xn6tp, resource: bindings, ignored listing per whitelist Feb 9 11:10:15.164: INFO: namespace e2e-tests-statefulset-xn6tp deletion completed in 8.186917701s • [SLOW TEST:50.068 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:10:15.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:10:15.371: INFO: Creating deployment "nginx-deployment" Feb 9 11:10:15.380: INFO: Waiting for observed generation 1 Feb 9 11:10:20.147: INFO: Waiting for all required pods to come up Feb 9 11:10:20.233: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 9 11:10:57.043: INFO: Waiting for deployment "nginx-deployment" to complete Feb 9 11:10:57.053: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 9 11:10:57.065: INFO: Updating deployment nginx-deployment Feb 9 11:10:57.065: INFO: Waiting for observed generation 2 Feb 9 11:11:01.274: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 9 11:11:01.530: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 9 
11:11:01.535: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 9 11:11:01.684: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 9 11:11:01.684: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 9 11:11:01.696: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 9 11:11:01.715: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 9 11:11:01.715: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 9 11:11:01.738: INFO: Updating deployment nginx-deployment Feb 9 11:11:01.738: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 9 11:11:01.789: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 9 11:11:07.084: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 9 11:11:07.662: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mlpwg/deployments/nginx-deployment,UID:bf9eeb85-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077539,Generation:3,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-09 11:10:58 +0000 UTC 2020-02-09 11:10:15 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-09 11:11:03 +0000 UTC 2020-02-09 11:11:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 9 11:11:08.128: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mlpwg/replicasets/nginx-deployment-5c98f8fb5,UID:d8794935-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077535,Generation:3,CreationTimestamp:2020-02-09 11:10:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bf9eeb85-4b2c-11ea-a994-fa163e34d433 0xc001a68ff7 0xc001a68ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 9 11:11:08.128: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 9 11:11:08.128: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mlpwg/replicasets/nginx-deployment-85ddf47c5d,UID:bfa28390-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077576,Generation:3,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment bf9eeb85-4b2c-11ea-a994-fa163e34d433 0xc001a690b7 0xc001a690b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 9 11:11:08.514: INFO: Pod "nginx-deployment-5c98f8fb5-47dpt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-47dpt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-47dpt,UID:d880d346-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077519,Generation:0,CreationTimestamp:2020-02-09 11:10:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc001a69b70 0xc001a69b71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a69be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a69c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 11:10:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.514: INFO: Pod "nginx-deployment-5c98f8fb5-5ptdr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5ptdr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-5ptdr,UID:ded1b15d-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077582,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc001a69cc7 0xc001a69cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a69ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a69ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.515: INFO: Pod "nginx-deployment-5c98f8fb5-6qq4f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6qq4f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-6qq4f,UID:ded18d74-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077583,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc001a69f77 0xc001a69f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a69fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000341830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.515: INFO: Pod "nginx-deployment-5c98f8fb5-7jrv5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7jrv5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-7jrv5,UID:d87e8397-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077497,Generation:0,CreationTimestamp:2020-02-09 11:10:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00059e107 0xc00059e108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00059f6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00059fcd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 11:10:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.516: INFO: Pod "nginx-deployment-5c98f8fb5-7p8jl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7p8jl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-7p8jl,UID:d8ceceae-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077522,Generation:0,CreationTimestamp:2020-02-09 11:10:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc0002ec6b7 0xc0002ec6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0002ed530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0002ed5a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 11:10:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.516: INFO: Pod "nginx-deployment-5c98f8fb5-jbj4b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jbj4b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-jbj4b,UID:df08f5f3-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077590,Generation:0,CreationTimestamp:2020-02-09 11:11:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc0002ed7b7 0xc0002ed7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0002ed8d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0002ed940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.517: INFO: Pod "nginx-deployment-5c98f8fb5-lplcd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lplcd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-lplcd,UID:d8dbfa80-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077525,Generation:0,CreationTimestamp:2020-02-09 11:10:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc0002edac7 0xc0002edac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0002edc40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0002edf30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:58 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 11:10:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.517: INFO: Pod "nginx-deployment-5c98f8fb5-r445c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r445c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-r445c,UID:deabce83-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077563,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00192c117 0xc00192c118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192c370} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192c390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.517: INFO: Pod "nginx-deployment-5c98f8fb5-rvsqb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rvsqb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-rvsqb,UID:d8811771-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077521,Generation:0,CreationTimestamp:2020-02-09 11:10:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00192c567 0xc00192c568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192c5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192c610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 11:10:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.518: INFO: Pod "nginx-deployment-5c98f8fb5-sdbss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sdbss,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-sdbss,UID:ded16738-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077584,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00192c7f7 0xc00192c7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192c860} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192d0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.518: INFO: Pod "nginx-deployment-5c98f8fb5-trp6w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-trp6w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-trp6w,UID:dea6f22a-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077558,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00192d147 0xc00192d148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192d630} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192d650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.518: INFO: Pod "nginx-deployment-5c98f8fb5-v59kt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v59kt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-v59kt,UID:deabbf96-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077569,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00192d9f7 0xc00192d9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192da60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192da80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.518: INFO: Pod "nginx-deployment-5c98f8fb5-wnv57" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wnv57,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-5c98f8fb5-wnv57,UID:ded19950-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077592,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d8794935-4b2c-11ea-a994-fa163e34d433 0xc00192daf7 0xc00192daf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192dba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192dbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.519: INFO: Pod "nginx-deployment-85ddf47c5d-2dghf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2dghf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-2dghf,UID:bfdb4694-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077450,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00192dc37 0xc00192dc38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil 
nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192dca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00192dd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-09 11:10:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6c8ad6b84cc82c416c581a7247493377498da70464f6c5d88bbe76cd8d4f0d84}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.519: INFO: Pod "nginx-deployment-85ddf47c5d-5496p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5496p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-5496p,UID:bfb57474-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077458,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00192de97 0xc00192de98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00192dfc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b546c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-09 11:10:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://aedeae0172025af619dee41ef8f553662d8574608f1c9681e41381cdb5418352}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.519: INFO: Pod "nginx-deployment-85ddf47c5d-5xhsp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5xhsp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-5xhsp,UID:dea7f680-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077559,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc000b54807 0xc000b54808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b54890} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b548b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.520: INFO: Pod "nginx-deployment-85ddf47c5d-bjfwx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bjfwx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-bjfwx,UID:dea7c2e6-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077557,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc000b54957 0xc000b54958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000058670} {node.kubernetes.io/unreachable Exists NoExecute 0xc000058810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 
11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.520: INFO: Pod "nginx-deployment-85ddf47c5d-btjj4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-btjj4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-btjj4,UID:bfbb7b60-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077446,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc000059257 0xc000059258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000059ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00042e2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-09 11:10:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7989d27ecb728bb62ab7bf41e7c53147e996e02355dc8a64df9b84aa698c0863}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.520: INFO: Pod "nginx-deployment-85ddf47c5d-cdzcj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cdzcj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-cdzcj,UID:dea7e251-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077560,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00042e997 0xc00042e998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00042f6a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00042f6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.520: INFO: Pod "nginx-deployment-85ddf47c5d-dkfsm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dkfsm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-dkfsm,UID:de8caaf4-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077547,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b4507 0xc0004b4508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b45f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b4680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.521: INFO: Pod "nginx-deployment-85ddf47c5d-dmkb2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dmkb2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-dmkb2,UID:bfbd56d7-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077463,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b48a7 0xc0004b48a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b4ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b4c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-09 11:10:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e38b99874ba2d3e8203bc8f994bfe2766b02670418d4a529544f7d37ed3167f3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.521: INFO: Pod "nginx-deployment-85ddf47c5d-dvvsq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dvvsq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-dvvsq,UID:bfbbf5c4-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077431,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b5037 0xc0004b5038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b5170} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b51b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-09 11:10:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b339d198c97942bdc57ab1bf30932337472be2b2a2ac582d0706dda3149c67dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.521: INFO: Pod "nginx-deployment-85ddf47c5d-fpb6d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fpb6d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-fpb6d,UID:deccabe9-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077578,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b5527 0xc0004b5528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b5780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b57e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.521: INFO: Pod "nginx-deployment-85ddf47c5d-hj6f8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hj6f8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-hj6f8,UID:bfdc24e1-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077453,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b58b7 0xc0004b58b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b5940} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b59a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:16 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-09 11:10:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://78121d52fa40bd75f082062b201577db3a3c5975871b1b19473f647cdd01beb1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.522: INFO: Pod "nginx-deployment-85ddf47c5d-jthj7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jthj7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-jthj7,UID:de79ed87-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077579,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b5b47 0xc0004b5b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b5d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b5d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 11:11:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.522: INFO: Pod "nginx-deployment-85ddf47c5d-njgq7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-njgq7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-njgq7,UID:deccb6c3-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077580,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc0004b5fc7 0xc0004b5fc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161a050} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161a080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.522: INFO: Pod "nginx-deployment-85ddf47c5d-vqw2q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vqw2q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-vqw2q,UID:decc5659-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077586,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161a127 0xc00161a128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161a190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161a1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.522: INFO: Pod "nginx-deployment-85ddf47c5d-w428j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w428j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-w428j,UID:deccc082-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077585,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161a247 0xc00161a248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161a2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161a320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.523: INFO: Pod "nginx-deployment-85ddf47c5d-x2f2q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x2f2q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-x2f2q,UID:dea7eff5-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077561,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161a507 0xc00161a508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161a730} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161a750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 
11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.523: INFO: Pod "nginx-deployment-85ddf47c5d-xfr7s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfr7s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-xfr7s,UID:decc7cc5-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077581,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161a807 0xc00161a808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161a930} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161a9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.523: INFO: Pod "nginx-deployment-85ddf47c5d-xndr6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xndr6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-xndr6,UID:de8cc890-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077553,Generation:0,CreationTimestamp:2020-02-09 11:11:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161aa67 
0xc00161aa68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161aae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161ab00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.524: INFO: Pod "nginx-deployment-85ddf47c5d-zk4hj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zk4hj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-zk4hj,UID:bfb57e7a-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077466,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161ab87 0xc00161ab88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161ac20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161ac40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-09 11:10:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a6e9815d13a2bc693005af7083db26a2795b9e71010ef19867641c42e979950f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 9 11:11:08.524: INFO: Pod "nginx-deployment-85ddf47c5d-ztd6z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ztd6z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mlpwg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mlpwg/pods/nginx-deployment-85ddf47c5d-ztd6z,UID:bfdabf33-4b2c-11ea-a994-fa163e34d433,ResourceVersion:21077440,Generation:0,CreationTimestamp:2020-02-09 11:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d bfa28390-4b2c-11ea-a994-fa163e34d433 0xc00161ad27 0xc00161ad28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dxswk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dxswk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dxswk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00161ad90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00161adb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:10:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-09 11:10:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 11:10:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3130814c8642d554074f93b05505aa860a7a4eeccf4a53e333b9b5fa8d2d4e82}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:11:08.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-mlpwg" for this suite. 
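For context on the spec that just completed ("deployment should support proportional scaling"): when a RollingUpdate Deployment is scaled while a rollout is still in progress, the controller distributes the additional replicas across the existing ReplicaSets in proportion to their current sizes instead of assigning them all to the newest one. The manifest the framework builds is not echoed in this log; the sketch below is a minimal Deployment that could be scaled mid-rollout to observe the behaviour. The labels and image match the pod dumps above; the name, replica count and strategy numbers are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment            # illustrative; the framework generates its own name
spec:
  replicas: 10                      # assumed starting size
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3                   # lets the new ReplicaSet grow past the desired count
      maxUnavailable: 2             # lets the old ReplicaSet shrink early
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Scaling such a Deployment (for example from 10 to 30 replicas) while both ReplicaSets still own pods makes the controller split the extra replicas between them roughly in proportion to their sizes, which is what the spec verifies.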
Feb 9 11:12:18.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:12:19.029: INFO: namespace: e2e-tests-deployment-mlpwg, resource: bindings, ignored listing per whitelist Feb 9 11:12:20.918: INFO: namespace e2e-tests-deployment-mlpwg deletion completed in 1m12.284802448s • [SLOW TEST:125.754 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:12:20.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-0c78c041-4b2d-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:12:24.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-fccqm" to be "success or failure" Feb 9 11:12:24.622: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.273979ms Feb 9 11:12:26.853: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322895197s Feb 9 11:12:29.069: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537983268s Feb 9 11:12:31.184: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65347897s Feb 9 11:12:33.222: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.69119742s Feb 9 11:12:35.246: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.715078986s Feb 9 11:12:37.256: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.725699602s Feb 9 11:12:39.287: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.756047327s Feb 9 11:12:43.700: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.16933386s Feb 9 11:12:45.718: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.187923842s Feb 9 11:12:47.737: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.206447161s Feb 9 11:12:49.759: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.228175645s Feb 9 11:12:51.967: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.436347215s Feb 9 11:12:54.281: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.750454479s Feb 9 11:12:56.300: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.769583868s STEP: Saw pod success Feb 9 11:12:56.300: INFO: Pod "pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:12:56.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 9 11:12:57.454: INFO: Waiting for pod pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005 to disappear Feb 9 11:12:58.064: INFO: Pod pod-configmaps-0c929e58-4b2d-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:12:58.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fccqm" for this suite. Feb 9 11:13:04.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:13:04.311: INFO: namespace: e2e-tests-configmap-fccqm, resource: bindings, ignored listing per whitelist Feb 9 11:13:04.315: INFO: namespace e2e-tests-configmap-fccqm deletion completed in 6.227754531s • [SLOW TEST:43.396 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:13:04.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fhfk9 Feb 9 11:13:14.714: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fhfk9 STEP: checking the pod's current state and verifying that restartCount is present Feb 9 11:13:14.727: INFO: Initial restart count of pod liveness-exec is 0 Feb 9 11:14:11.336: INFO: Restart count of pod 
e2e-tests-container-probe-fhfk9/liveness-exec is now 1 (56.609214118s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:14:11.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fhfk9" for this suite. Feb 9 11:14:17.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:14:17.586: INFO: namespace: e2e-tests-container-probe-fhfk9, resource: bindings, ignored listing per whitelist Feb 9 11:14:17.635: INFO: namespace e2e-tests-container-probe-fhfk9 deletion completed in 6.183916644s • [SLOW TEST:73.320 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:14:17.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 9 11:14:28.457: INFO: Successfully updated pod "annotationupdate501cb7c5-4b2d-11ea-aa78-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:14:30.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hxzlh" for this suite. 
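For reference on the Downward API spec above: the pod mounts a downwardAPI volume that projects metadata.annotations into a file, the test then patches the pod's annotations, and the kubelet rewrites the file (that is what the "Successfully updated pod annotationupdate..." step confirms before the content check). The exact manifest is not in the log; a minimal sketch, with busybox as an assumed stand-in for the e2e test image:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate             # illustrative; the real name carries a unique suffix
  annotations:
    build: "one"                     # value the test later mutates
spec:
  containers:
  - name: client-container
    image: busybox                   # stand-in image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations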
Feb 9 11:15:02.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:15:02.932: INFO: namespace: e2e-tests-downward-api-hxzlh, resource: bindings, ignored listing per whitelist Feb 9 11:15:02.932: INFO: namespace e2e-tests-downward-api-hxzlh deletion completed in 32.286389096s • [SLOW TEST:45.296 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:15:02.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 9 11:15:03.161: INFO: Waiting up to 5m0s for pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-p5pvg" to be "success or failure" Feb 9 11:15:03.186: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.007591ms Feb 9 11:15:05.224: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063023265s Feb 9 11:15:07.297: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135437752s Feb 9 11:15:09.378: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216291249s Feb 9 11:15:11.433: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271456046s Feb 9 11:15:13.448: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.286800103s Feb 9 11:15:15.464: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.302881412s STEP: Saw pod success Feb 9 11:15:15.464: INFO: Pod "pod-6b2674da-4b2d-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:15:15.469: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6b2674da-4b2d-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:15:16.637: INFO: Waiting for pod pod-6b2674da-4b2d-11ea-aa78-0242ac110005 to disappear Feb 9 11:15:16.741: INFO: Pod pod-6b2674da-4b2d-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:15:16.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p5pvg" for this suite. 
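The "(non-root,0644,tmpfs)" EmptyDir spec above mounts an emptyDir backed by memory (medium: Memory, i.e. tmpfs), writes a file with mode 0644 as a non-root UID, and verifies the content and permissions from inside the container. The test image and its arguments are not visible in the log; a rough equivalent using busybox, with the UID and paths as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0644        # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root UID; the test's UID may differ
    fsGroup: 1001
  containers:
  - name: test-container
    image: busybox                   # stand-in for the e2e mounttest image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir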
Feb 9 11:15:22.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:15:22.973: INFO: namespace: e2e-tests-emptydir-p5pvg, resource: bindings, ignored listing per whitelist Feb 9 11:15:22.976: INFO: namespace e2e-tests-emptydir-p5pvg deletion completed in 6.222187904s • [SLOW TEST:20.044 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:15:22.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-77178982-4b2d-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:15:23.199: INFO: Waiting up to 5m0s for pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-jqwc4" to be "success or failure" Feb 9 11:15:23.225: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.509349ms Feb 9 11:15:25.692: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493280049s Feb 9 11:15:27.714: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514493464s Feb 9 11:15:29.726: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527154016s Feb 9 11:15:32.563: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.363901352s Feb 9 11:15:34.605: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.405950243s STEP: Saw pod success Feb 9 11:15:34.605: INFO: Pod "pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:15:34.622: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 9 11:15:35.429: INFO: Waiting for pod pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005 to disappear Feb 9 11:15:35.552: INFO: Pod pod-configmaps-77185533-4b2d-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:15:35.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jqwc4" for this suite. 
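The "volume with mappings" ConfigMap spec above checks that individual ConfigMap keys can be remapped to arbitrary relative paths with items. A minimal sketch assuming a single key data-1 (in the real run the ConfigMap and pod names carry unique suffixes, and the container image is the suite's own mounttest image):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1                    # key name and value are illustrative
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # stand-in image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1         # the key is remapped to this nested relative path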
Feb 9 11:15:41.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:15:41.743: INFO: namespace: e2e-tests-configmap-jqwc4, resource: bindings, ignored listing per whitelist Feb 9 11:15:41.950: INFO: namespace e2e-tests-configmap-jqwc4 deletion completed in 6.379351014s • [SLOW TEST:18.974 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:15:41.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 9 11:15:52.850: INFO: Successfully updated pod "pod-update-activedeadlineseconds-82661c7b-4b2d-11ea-aa78-0242ac110005" Feb 9 11:15:52.851: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-82661c7b-4b2d-11ea-aa78-0242ac110005" in namespace "e2e-tests-pods-vqj58" to be "terminated due to deadline exceeded" Feb 9 11:15:52.863: INFO: Pod "pod-update-activedeadlineseconds-82661c7b-4b2d-11ea-aa78-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.815864ms Feb 9 11:15:55.016: INFO: Pod "pod-update-activedeadlineseconds-82661c7b-4b2d-11ea-aa78-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.165581469s Feb 9 11:15:55.017: INFO: Pod "pod-update-activedeadlineseconds-82661c7b-4b2d-11ea-aa78-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:15:55.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vqj58" for this suite. 
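The activeDeadlineSeconds spec above creates a long-running pod and then updates spec.activeDeadlineSeconds on the live object; once the deadline passes, the kubelet kills the pod, which is the Phase="Failed", Reason="DeadlineExceeded" transition in the log. activeDeadlineSeconds is one of the few pod spec fields that may be changed after creation (it can be added or lowered, but not removed or raised). The shape of the field, with assumed names and values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds    # illustrative
spec:
  activeDeadlineSeconds: 5                  # after this many seconds the kubelet fails the pod
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # stand-in image
    # in the e2e flow the pod is created without the field and it is patched in afterwards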
Feb 9 11:16:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:16:01.152: INFO: namespace: e2e-tests-pods-vqj58, resource: bindings, ignored listing per whitelist Feb 9 11:16:01.431: INFO: namespace e2e-tests-pods-vqj58 deletion completed in 6.401866386s • [SLOW TEST:19.480 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:16:01.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 9 11:16:01.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8swkp' Feb 9 11:16:04.522: INFO: stderr: "" Feb 9 11:16:04.522: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Feb 9 11:16:05.541: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:05.541: INFO: Found 0 / 1 Feb 9 11:16:06.616: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:06.617: INFO: Found 0 / 1 Feb 9 11:16:07.535: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:07.535: INFO: Found 0 / 1 Feb 9 11:16:08.554: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:08.554: INFO: Found 0 / 1 Feb 9 11:16:09.535: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:09.535: INFO: Found 0 / 1 Feb 9 11:16:10.564: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:10.565: INFO: Found 0 / 1 Feb 9 11:16:11.832: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:11.832: INFO: Found 0 / 1 Feb 9 11:16:12.560: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:12.560: INFO: Found 0 / 1 Feb 9 11:16:13.536: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:13.536: INFO: Found 0 / 1 Feb 9 11:16:14.579: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:14.579: INFO: Found 1 / 1 Feb 9 11:16:14.580: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 9 11:16:14.612: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:16:14.612: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Feb 9 11:16:14.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7nmh4 redis-master --namespace=e2e-tests-kubectl-8swkp' Feb 9 11:16:14.809: INFO: stderr: "" Feb 9 11:16:14.809: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 Feb 11:16:12.438 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Feb 11:16:12.438 # Server started, Redis version 3.2.12\n1:M 09 Feb 11:16:12.439 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Feb 11:16:12.439 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 9 11:16:14.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7nmh4 redis-master --namespace=e2e-tests-kubectl-8swkp --tail=1' Feb 9 11:16:15.067: INFO: stderr: "" Feb 9 11:16:15.068: INFO: stdout: "1:M 09 Feb 11:16:12.439 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 9 11:16:15.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7nmh4 redis-master --namespace=e2e-tests-kubectl-8swkp --limit-bytes=1' Feb 9 11:16:15.230: INFO: stderr: "" Feb 9 11:16:15.230: INFO: stdout: " " STEP: exposing timestamps Feb 9 11:16:15.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7nmh4 redis-master --namespace=e2e-tests-kubectl-8swkp --tail=1 --timestamps' Feb 9 11:16:15.375: INFO: stderr: "" Feb 9 11:16:15.376: INFO: stdout: "2020-02-09T11:16:12.441145477Z 1:M 09 Feb 11:16:12.439 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 9 11:16:17.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7nmh4 redis-master --namespace=e2e-tests-kubectl-8swkp --since=1s' Feb 9 11:16:18.107: INFO: stderr: "" Feb 9 11:16:18.107: INFO: stdout: "" Feb 9 11:16:18.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-7nmh4 redis-master --namespace=e2e-tests-kubectl-8swkp --since=24h' Feb 9 11:16:18.282: INFO: stderr: "" Feb 9 11:16:18.282: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 09 Feb 11:16:12.438 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Feb 11:16:12.438 # Server started, Redis version 3.2.12\n1:M 09 Feb 11:16:12.439 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Feb 11:16:12.439 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 9 11:16:18.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8swkp' Feb 9 11:16:18.412: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 11:16:18.412: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 9 11:16:18.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-8swkp' Feb 9 11:16:18.705: INFO: stderr: "No resources found.\n" Feb 9 11:16:18.706: INFO: stdout: "" Feb 9 11:16:18.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-8swkp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 9 11:16:19.013: INFO: stderr: "" Feb 9 11:16:19.014: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:16:19.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8swkp" for this suite. 
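The replication controller piped to "kubectl create -f -" at the start of this spec is not echoed into the log. A manifest consistent with what the test observed (name redis-master, container redis-master, selector app=redis, Redis 3.2 listening on 6379) would look roughly like the sketch below; the image is an assumed stand-in for the suite's own redis image. The log-filtering flags themselves (--tail, --limit-bytes, --timestamps, --since) appear verbatim in the kubectl invocations above.

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2             # assumed stand-in image
        ports:
        - containerPort: 6379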
Feb 9 11:16:41.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:16:41.167: INFO: namespace: e2e-tests-kubectl-8swkp, resource: bindings, ignored listing per whitelist Feb 9 11:16:41.237: INFO: namespace e2e-tests-kubectl-8swkp deletion completed in 22.194060802s • [SLOW TEST:39.806 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:16:41.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0209 11:17:12.153160 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 9 11:17:12.153: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:17:12.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-v4fv9" for this suite. 
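The garbage-collector spec above deletes the Deployment with deleteOptions.propagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet is left behind rather than garbage-collected. The request body is not shown in the log; expressed as YAML for readability (clients normally send it as JSON), the DeleteOptions sent with the DELETE would be roughly:

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan            # keep dependents; Background or Foreground would cascade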
Feb 9 11:17:22.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:17:22.655: INFO: namespace: e2e-tests-gc-v4fv9, resource: bindings, ignored listing per whitelist Feb 9 11:17:22.807: INFO: namespace e2e-tests-gc-v4fv9 deletion completed in 10.590552245s • [SLOW TEST:41.570 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:17:22.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-w2ggd STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-w2ggd STEP: Deleting pre-stop pod Feb 9 11:17:49.168: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:17:49.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-w2ggd" for this suite. 
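The PreStop spec above starts a server pod and then a tester pod whose preStop hook reports back to the server when the tester is deleted; the "prestop": 1 counter in the JSON payload is that report arriving. Whether the real test wires the hook as an exec or an httpGet handler is not visible here; a generic sketch with an exec hook, where the names and the URL are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: tester                        # illustrative
spec:
  terminationGracePeriodSeconds: 30   # the hook must finish within the grace period
  containers:
  - name: tester
    image: busybox                    # stand-in image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -q -O- http://server:8080/prestop || true"]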
Feb 9 11:18:33.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:18:33.651: INFO: namespace: e2e-tests-prestop-w2ggd, resource: bindings, ignored listing per whitelist Feb 9 11:18:33.711: INFO: namespace e2e-tests-prestop-w2ggd deletion completed in 44.406630553s • [SLOW TEST:70.903 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:18:33.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 9 11:18:34.287: INFO: Waiting up to 5m0s for pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-jk775" to be "success or failure" Feb 9 11:18:34.313: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.296163ms Feb 9 11:18:36.706: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419281481s Feb 9 11:18:38.718: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430690878s Feb 9 11:18:40.772: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484707624s Feb 9 11:18:42.800: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512391422s Feb 9 11:18:44.809: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.521736203s Feb 9 11:18:47.225: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.937949373s STEP: Saw pod success Feb 9 11:18:47.225: INFO: Pod "pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:18:47.234: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:18:47.603: INFO: Waiting for pod pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005 to disappear Feb 9 11:18:47.618: INFO: Pod pod-e8e8d3b8-4b2d-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:18:47.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jk775" for this suite. 
Feb 9 11:18:55.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:18:55.854: INFO: namespace: e2e-tests-emptydir-jk775, resource: bindings, ignored listing per whitelist Feb 9 11:18:55.873: INFO: namespace e2e-tests-emptydir-jk775 deletion completed in 8.242979524s • [SLOW TEST:22.162 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:18:55.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-f60078b0-4b2d-11ea-aa78-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-f6007b3f-4b2d-11ea-aa78-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f60078b0-4b2d-11ea-aa78-0242ac110005 STEP: Updating configmap cm-test-opt-upd-f6007b3f-4b2d-11ea-aa78-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-f6007ba2-4b2d-11ea-aa78-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:19:16.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fk7zx" for this suite. 
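The "optional updates" ConfigMap spec above mounts ConfigMaps marked optional: true, then deletes one, updates another and creates a third, and waits for the files in the volume to catch up; configMap volumes are refreshed by the kubelet on its periodic sync, so the changes become visible without restarting the pod. A reduced sketch (the real pod also mounts the create case, and all names carry unique suffixes):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional       # illustrative
spec:
  containers:
  - name: watcher
    image: busybox                    # stand-in image
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-del
    - name: cm-upd
      mountPath: /etc/cm-upd
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del           # deleted during the test; optional, so the pod keeps running
      optional: true
  - name: cm-upd
    configMap:
      name: cm-test-opt-upd           # updated during the test
      optional: true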
Feb 9 11:19:42.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:19:43.074: INFO: namespace: e2e-tests-configmap-fk7zx, resource: bindings, ignored listing per whitelist Feb 9 11:19:43.088: INFO: namespace e2e-tests-configmap-fk7zx deletion completed in 26.310784443s • [SLOW TEST:47.214 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:19:43.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-121ccec3-4b2e-11ea-aa78-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-121ccea5-4b2e-11ea-aa78-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 9 11:19:43.296: INFO: Waiting up to 5m0s for pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-wfzhn" to be "success or failure" Feb 9 11:19:43.382: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.837588ms Feb 9 11:19:45.463: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167051801s Feb 9 11:19:47.486: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189931398s Feb 9 11:19:49.495: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199256033s Feb 9 11:19:51.512: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216618667s Feb 9 11:19:53.527: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.231124413s Feb 9 11:19:55.810: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.513740516s STEP: Saw pod success Feb 9 11:19:55.810: INFO: Pod "projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:19:55.867: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005 container projected-all-volume-test: STEP: delete the pod Feb 9 11:19:56.029: INFO: Waiting for pod projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005 to disappear Feb 9 11:19:56.035: INFO: Pod projected-volume-121ccdaa-4b2e-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:19:56.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wfzhn" for this suite. Feb 9 11:20:02.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:20:02.343: INFO: namespace: e2e-tests-projected-wfzhn, resource: bindings, ignored listing per whitelist Feb 9 11:20:02.381: INFO: namespace e2e-tests-projected-wfzhn deletion completed in 6.320655674s • [SLOW TEST:19.292 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:20:02.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 9 11:20:15.524: INFO: Successfully updated pod "annotationupdate1db35edb-4b2e-11ea-aa78-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:20:17.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-68mrr" for this suite. 
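Editor's note: the projected-volume case summarized above mounts a ConfigMap, a Secret, and downward-API metadata through a single projected volume and checks that all three sources show up in the container. A rough, hedged equivalent with hypothetical resource names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm-key /all/secret-key"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: demo-configmap       # hypothetical ConfigMap with a key named "data"
          items:
          - key: data
            path: cm-key
      - secret:
          name: demo-secret          # hypothetical Secret with a key named "data"
          items:
          - key: data
            path: secret-key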
Feb 9 11:20:41.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:20:41.896: INFO: namespace: e2e-tests-projected-68mrr, resource: bindings, ignored listing per whitelist Feb 9 11:20:41.905: INFO: namespace e2e-tests-projected-68mrr deletion completed in 24.29516185s • [SLOW TEST:39.524 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:20:41.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 9 11:20:42.142: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:21:01.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-hzbkg" for this suite. 
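Editor's note: the init-container case whose namespace was just destroyed verifies that when an init container fails on a pod with restartPolicy: Never, the app container is never started and the pod ends up Failed. A hedged sketch of such a spec (names and commands are illustrative, not the suite's generated pod):

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo               # hypothetical
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]  # non-zero exit; with restartPolicy Never it is not retried
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]

Because the init container never succeeds, status.phase moves to Failed and the app container stays unstarted, which is what the conformance assertion checks.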
Feb 9 11:21:09.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:21:09.304: INFO: namespace: e2e-tests-init-container-hzbkg, resource: bindings, ignored listing per whitelist Feb 9 11:21:09.410: INFO: namespace e2e-tests-init-container-hzbkg deletion completed in 8.212649961s • [SLOW TEST:27.504 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:21:09.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-vlfmr in namespace e2e-tests-proxy-6k6m8 I0209 11:21:09.687395 8 runners.go:184] Created replication controller with name: proxy-service-vlfmr, namespace: e2e-tests-proxy-6k6m8, replica count: 1 I0209 11:21:10.738885 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:11.739849 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:12.740435 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:13.740814 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:14.741176 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:15.741677 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:16.742155 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:17.742840 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:18.743417 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:19.744535 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0209 11:21:20.745587 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 11:21:21.746674 8 runners.go:184] proxy-service-vlfmr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 9 11:21:21.775: INFO: setup took 12.254165237s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 9 11:21:21.816: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6k6m8/pods/proxy-service-vlfmr-9c78g:1080/proxy/: [the proxied response bodies, the remaining proxy attempts, the proxy test's teardown and summary, and the header of the following Kubectl api-versions test were garbled during extraction and are omitted] >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 9 11:21:37.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 9 11:21:37.662: INFO: stderr: "" Feb 9 11:21:37.662: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:21:37.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gsdrm" for this suite. 
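Editor's note: for context on the truncated proxy test above, it stands up an echo server behind a ReplicationController and a multi-port Service, then fetches it through apiserver proxy paths of the form /api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/ (visible in the log) and /api/v1/namespaces/<ns>/services/<svc>:<portname>/proxy/. A hedged sketch of the Service side only; the name, port names, and selector are illustrative, not the generated ones.

apiVersion: v1
kind: Service
metadata:
  name: proxy-service-demo           # hypothetical; the run used proxy-service-vlfmr
spec:
  selector:
    name: proxy-service-demo         # must match the labels on the echo-server pods
  ports:
  - name: portname1                  # illustrative port name, addressable as <svc>:portname1 via the proxy path
    port: 80
    targetPort: 8080
  - name: portname2
    port: 81
    targetPort: 8081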
Feb 9 11:21:43.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:21:43.763: INFO: namespace: e2e-tests-kubectl-gsdrm, resource: bindings, ignored listing per whitelist Feb 9 11:21:43.944: INFO: namespace e2e-tests-kubectl-gsdrm deletion completed in 6.268698609s • [SLOW TEST:6.618 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:21:43.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-5a300a31-4b2e-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:21:44.292: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-p8xmd" to be "success or failure" Feb 9 11:21:44.302: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.339518ms Feb 9 11:21:46.372: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079905586s Feb 9 11:21:48.435: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143081848s Feb 9 11:21:50.614: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321950748s Feb 9 11:21:52.639: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346728878s Feb 9 11:21:54.727: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.435142282s STEP: Saw pod success Feb 9 11:21:54.727: INFO: Pod "pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:21:54.744: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 9 11:21:55.105: INFO: Waiting for pod pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005 to disappear Feb 9 11:21:55.114: INFO: Pod pod-projected-configmaps-5a3c764e-4b2e-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:21:55.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p8xmd" for this suite. Feb 9 11:22:01.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:22:01.353: INFO: namespace: e2e-tests-projected-p8xmd, resource: bindings, ignored listing per whitelist Feb 9 11:22:01.387: INFO: namespace e2e-tests-projected-p8xmd deletion completed in 6.264882684s • [SLOW TEST:17.441 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:22:01.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-w5qmq Feb 9 11:22:11.659: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-w5qmq STEP: checking the pod's current state and verifying that restartCount is present Feb 9 11:22:11.664: INFO: Initial restart count of pod liveness-http is 0 Feb 9 11:22:32.065: INFO: Restart count of pod e2e-tests-container-probe-w5qmq/liveness-http is now 1 (20.400422864s elapsed) Feb 9 11:22:52.617: INFO: Restart count of pod e2e-tests-container-probe-w5qmq/liveness-http is now 2 (40.952427869s elapsed) Feb 9 11:23:12.975: INFO: Restart count of pod e2e-tests-container-probe-w5qmq/liveness-http is now 3 (1m1.310865166s elapsed) Feb 9 11:23:31.539: INFO: Restart count of pod e2e-tests-container-probe-w5qmq/liveness-http is now 4 (1m19.875147414s elapsed) Feb 9 11:24:40.396: INFO: Restart count of pod e2e-tests-container-probe-w5qmq/liveness-http is now 5 
(2m28.73180839s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:24:40.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-w5qmq" for this suite. Feb 9 11:24:46.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:24:46.794: INFO: namespace: e2e-tests-container-probe-w5qmq, resource: bindings, ignored listing per whitelist Feb 9 11:24:47.057: INFO: namespace e2e-tests-container-probe-w5qmq deletion completed in 6.459326994s • [SLOW TEST:165.670 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:24:47.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 9 11:24:47.304: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 9 11:24:47.377: INFO: Waiting for terminating namespaces to be deleted... 
Feb 9 11:24:47.385: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 9 11:24:47.409: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 9 11:24:47.409: INFO: Container coredns ready: true, restart count 0 Feb 9 11:24:47.409: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 9 11:24:47.409: INFO: Container kube-proxy ready: true, restart count 0 Feb 9 11:24:47.409: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 9 11:24:47.409: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 9 11:24:47.409: INFO: Container weave ready: true, restart count 0 Feb 9 11:24:47.409: INFO: Container weave-npc ready: true, restart count 0 Feb 9 11:24:47.409: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 9 11:24:47.409: INFO: Container coredns ready: true, restart count 0 Feb 9 11:24:47.409: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 9 11:24:47.409: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 9 11:24:47.409: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 9 11:24:47.518: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
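Editor's note: the "another pod that requires unavailable amount of CPU" in the step above is simply a pod whose CPU request exceeds whatever is left on the single node once the filler pods are scheduled, which produces the FailedScheduling event recorded next. A sketch of such a pod; the request value is illustrative, the real test derives it from the node's allocatable CPU, and only the pause image and the additional-pod name are taken from the log.

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod               # name as it appears in the FailedScheduling event below
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"                  # illustrative; anything above the node's remaining allocatable CPU
      limits:
        cpu: "600m"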
STEP: Considering event: Type = [Normal], Name = [filler-pod-c775d8a6-4b2e-11ea-aa78-0242ac110005.15f1b8364f11769c], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-v4hc5/filler-pod-c775d8a6-4b2e-11ea-aa78-0242ac110005 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-c775d8a6-4b2e-11ea-aa78-0242ac110005.15f1b83762bada3d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c775d8a6-4b2e-11ea-aa78-0242ac110005.15f1b83805a69eba], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-c775d8a6-4b2e-11ea-aa78-0242ac110005.15f1b8382fddfaef], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f1b838a59f81e1], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:24:58.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-v4hc5" for this suite. Feb 9 11:25:06.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:25:07.076: INFO: namespace: e2e-tests-sched-pred-v4hc5, resource: bindings, ignored listing per whitelist Feb 9 11:25:07.107: INFO: namespace e2e-tests-sched-pred-v4hc5 deletion completed in 8.381951579s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:20.050 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:25:07.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 9 11:25:23.705: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 
11:25:24.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-8m99s" for this suite. Feb 9 11:25:53.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:25:53.471: INFO: namespace: e2e-tests-replicaset-8m99s, resource: bindings, ignored listing per whitelist Feb 9 11:25:53.668: INFO: namespace e2e-tests-replicaset-8m99s deletion completed in 28.889805784s • [SLOW TEST:46.561 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:25:53.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:25:54.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-sv4xz" to be "success or failure" Feb 9 11:25:54.106: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.659936ms Feb 9 11:25:56.122: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035375019s Feb 9 11:25:58.139: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052543057s Feb 9 11:26:00.158: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070740532s Feb 9 11:26:02.173: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086628591s Feb 9 11:26:04.203: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.116264694s STEP: Saw pod success Feb 9 11:26:04.203: INFO: Pod "downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:26:04.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:26:04.400: INFO: Waiting for pod downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005 to disappear Feb 9 11:26:04.430: INFO: Pod downwardapi-volume-ef130dcb-4b2e-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:26:04.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sv4xz" for this suite. Feb 9 11:26:10.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:26:10.734: INFO: namespace: e2e-tests-downward-api-sv4xz, resource: bindings, ignored listing per whitelist Feb 9 11:26:10.769: INFO: namespace e2e-tests-downward-api-sv4xz deletion completed in 6.306031974s • [SLOW TEST:17.100 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:26:10.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:26:23.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-pgz6d" for this suite. 
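Editor's note: the Downward API case summarized above projects only the pod's own name into a volume file and verifies the file's contents. A minimal equivalent with hypothetical names (the container name client-container matches the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the file ends up containing the pod's own name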
Feb 9 11:27:07.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:27:07.380: INFO: namespace: e2e-tests-kubelet-test-pgz6d, resource: bindings, ignored listing per whitelist Feb 9 11:27:07.540: INFO: namespace e2e-tests-kubelet-test-pgz6d deletion completed in 44.32821091s • [SLOW TEST:56.771 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:27:07.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-pxw7 STEP: Creating a pod to test atomic-volume-subpath Feb 9 11:27:07.913: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pxw7" in namespace "e2e-tests-subpath-dxdwl" to be "success or failure" Feb 9 11:27:07.968: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.850765ms Feb 9 11:27:09.988: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074489011s Feb 9 11:27:12.018: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104708419s Feb 9 11:27:14.153: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239663961s Feb 9 11:27:16.199: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285796296s Feb 9 11:27:18.287: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.372940222s Feb 9 11:27:20.309: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.395000662s Feb 9 11:27:22.321: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.407189243s Feb 9 11:27:24.332: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.418611382s Feb 9 11:27:26.351: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 18.437097794s Feb 9 11:27:28.375: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.461455385s Feb 9 11:27:30.396: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 22.482056163s Feb 9 11:27:32.412: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 24.498489387s Feb 9 11:27:34.430: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 26.516107486s Feb 9 11:27:36.465: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 28.551221823s Feb 9 11:27:38.499: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 30.585534502s Feb 9 11:27:40.537: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 32.623658347s Feb 9 11:27:42.567: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Running", Reason="", readiness=false. Elapsed: 34.653306117s Feb 9 11:27:44.592: INFO: Pod "pod-subpath-test-configmap-pxw7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.67836006s STEP: Saw pod success Feb 9 11:27:44.592: INFO: Pod "pod-subpath-test-configmap-pxw7" satisfied condition "success or failure" Feb 9 11:27:44.609: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pxw7 container test-container-subpath-configmap-pxw7: STEP: delete the pod Feb 9 11:27:44.917: INFO: Waiting for pod pod-subpath-test-configmap-pxw7 to disappear Feb 9 11:27:44.923: INFO: Pod pod-subpath-test-configmap-pxw7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-pxw7 Feb 9 11:27:44.924: INFO: Deleting pod "pod-subpath-test-configmap-pxw7" in namespace "e2e-tests-subpath-dxdwl" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:27:44.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dxdwl" for this suite. 
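Editor's note: the subpath case that just finished mounts a single ConfigMap key with subPath directly over a file path that already exists in the container image and confirms the container sees the projected content at that exact path. A hedged sketch, with illustrative names and an arbitrary pre-existing file:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/passwd"]   # /etc/passwd is a file that already exists in the image
    volumeMounts:
    - name: cm
      mountPath: /etc/passwd         # mountPath is the existing file itself, not a directory
      subPath: this_file             # only this key from the ConfigMap volume is mounted there
  volumes:
  - name: cm
    configMap:
      name: subpath-demo-cm          # hypothetical ConfigMap containing a key named this_file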
Feb 9 11:27:51.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:27:51.319: INFO: namespace: e2e-tests-subpath-dxdwl, resource: bindings, ignored listing per whitelist Feb 9 11:27:51.361: INFO: namespace e2e-tests-subpath-dxdwl deletion completed in 6.24811016s • [SLOW TEST:43.820 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:27:51.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8kkzb [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 9 11:27:51.726: INFO: Found 0 stateful pods, waiting for 3 Feb 9 11:28:02.064: INFO: Found 1 stateful pods, waiting for 3 Feb 9 11:28:11.804: INFO: Found 2 stateful pods, waiting for 3 Feb 9 11:28:21.874: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:28:21.874: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:28:21.874: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 9 11:28:31.740: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:28:31.740: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:28:31.740: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:28:31.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8kkzb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:28:32.440: INFO: stderr: "I0209 11:28:32.085911 1339 log.go:172] (0xc000138630) (0xc000627360) Create stream\nI0209 11:28:32.086654 1339 log.go:172] (0xc000138630) (0xc000627360) Stream added, broadcasting: 1\nI0209 11:28:32.096753 1339 log.go:172] (0xc000138630) Reply frame received for 1\nI0209 11:28:32.096802 1339 log.go:172] (0xc000138630) (0xc000627400) Create stream\nI0209 11:28:32.096816 1339 log.go:172] 
(0xc000138630) (0xc000627400) Stream added, broadcasting: 3\nI0209 11:28:32.098658 1339 log.go:172] (0xc000138630) Reply frame received for 3\nI0209 11:28:32.098700 1339 log.go:172] (0xc000138630) (0xc0000dc000) Create stream\nI0209 11:28:32.098716 1339 log.go:172] (0xc000138630) (0xc0000dc000) Stream added, broadcasting: 5\nI0209 11:28:32.099792 1339 log.go:172] (0xc000138630) Reply frame received for 5\nI0209 11:28:32.288582 1339 log.go:172] (0xc000138630) Data frame received for 3\nI0209 11:28:32.288727 1339 log.go:172] (0xc000627400) (3) Data frame handling\nI0209 11:28:32.288765 1339 log.go:172] (0xc000627400) (3) Data frame sent\nI0209 11:28:32.426182 1339 log.go:172] (0xc000138630) Data frame received for 1\nI0209 11:28:32.426287 1339 log.go:172] (0xc000627360) (1) Data frame handling\nI0209 11:28:32.426331 1339 log.go:172] (0xc000627360) (1) Data frame sent\nI0209 11:28:32.426436 1339 log.go:172] (0xc000138630) (0xc000627360) Stream removed, broadcasting: 1\nI0209 11:28:32.427126 1339 log.go:172] (0xc000138630) (0xc000627400) Stream removed, broadcasting: 3\nI0209 11:28:32.427439 1339 log.go:172] (0xc000138630) (0xc0000dc000) Stream removed, broadcasting: 5\nI0209 11:28:32.427508 1339 log.go:172] (0xc000138630) (0xc000627360) Stream removed, broadcasting: 1\nI0209 11:28:32.427516 1339 log.go:172] (0xc000138630) (0xc000627400) Stream removed, broadcasting: 3\nI0209 11:28:32.427521 1339 log.go:172] (0xc000138630) (0xc0000dc000) Stream removed, broadcasting: 5\nI0209 11:28:32.427790 1339 log.go:172] (0xc000138630) Go away received\n" Feb 9 11:28:32.440: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:28:32.440: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 9 11:28:42.649: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 9 11:28:53.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8kkzb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:28:54.363: INFO: stderr: "I0209 11:28:53.837655 1361 log.go:172] (0xc000138630) (0xc0006fa640) Create stream\nI0209 11:28:53.838081 1361 log.go:172] (0xc000138630) (0xc0006fa640) Stream added, broadcasting: 1\nI0209 11:28:53.855242 1361 log.go:172] (0xc000138630) Reply frame received for 1\nI0209 11:28:53.855578 1361 log.go:172] (0xc000138630) (0xc0005c8d20) Create stream\nI0209 11:28:53.855639 1361 log.go:172] (0xc000138630) (0xc0005c8d20) Stream added, broadcasting: 3\nI0209 11:28:53.857961 1361 log.go:172] (0xc000138630) Reply frame received for 3\nI0209 11:28:53.858018 1361 log.go:172] (0xc000138630) (0xc0005c8e60) Create stream\nI0209 11:28:53.858036 1361 log.go:172] (0xc000138630) (0xc0005c8e60) Stream added, broadcasting: 5\nI0209 11:28:53.861138 1361 log.go:172] (0xc000138630) Reply frame received for 5\nI0209 11:28:54.115930 1361 log.go:172] (0xc000138630) Data frame received for 3\nI0209 11:28:54.116153 1361 log.go:172] (0xc0005c8d20) (3) Data frame handling\nI0209 11:28:54.116189 1361 log.go:172] (0xc0005c8d20) (3) Data frame sent\nI0209 11:28:54.350571 1361 log.go:172] (0xc000138630) (0xc0005c8d20) Stream removed, broadcasting: 3\nI0209 11:28:54.350786 1361 log.go:172] (0xc000138630) Data frame received for 1\nI0209 11:28:54.350826 
1361 log.go:172] (0xc0006fa640) (1) Data frame handling\nI0209 11:28:54.350850 1361 log.go:172] (0xc0006fa640) (1) Data frame sent\nI0209 11:28:54.350875 1361 log.go:172] (0xc000138630) (0xc0006fa640) Stream removed, broadcasting: 1\nI0209 11:28:54.350945 1361 log.go:172] (0xc000138630) (0xc0005c8e60) Stream removed, broadcasting: 5\nI0209 11:28:54.351056 1361 log.go:172] (0xc000138630) Go away received\nI0209 11:28:54.351755 1361 log.go:172] (0xc000138630) (0xc0006fa640) Stream removed, broadcasting: 1\nI0209 11:28:54.351776 1361 log.go:172] (0xc000138630) (0xc0005c8d20) Stream removed, broadcasting: 3\nI0209 11:28:54.351784 1361 log.go:172] (0xc000138630) (0xc0005c8e60) Stream removed, broadcasting: 5\n" Feb 9 11:28:54.363: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:28:54.363: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:28:54.663: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:28:54.663: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:28:54.663: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:28:54.663: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:04.686: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:29:04.686: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:04.686: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:14.694: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:29:14.694: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:14.694: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:24.971: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:29:24.971: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:34.685: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:29:34.685: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:29:44.912: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update STEP: Rolling back to a previous revision Feb 9 11:29:54.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8kkzb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:29:55.246: INFO: stderr: "I0209 11:29:54.934183 1382 log.go:172] (0xc00013a0b0) (0xc00071a640) Create stream\nI0209 11:29:54.934607 1382 log.go:172] (0xc00013a0b0) (0xc00071a640) Stream added, broadcasting: 1\nI0209 11:29:54.941584 1382 log.go:172] (0xc00013a0b0) Reply frame received for 1\nI0209 11:29:54.941629 1382 log.go:172] (0xc00013a0b0) (0xc00071a6e0) Create stream\nI0209 11:29:54.941644 1382 log.go:172] (0xc00013a0b0) (0xc00071a6e0) Stream added, broadcasting: 
3\nI0209 11:29:54.944738 1382 log.go:172] (0xc00013a0b0) Reply frame received for 3\nI0209 11:29:54.944809 1382 log.go:172] (0xc00013a0b0) (0xc00071a780) Create stream\nI0209 11:29:54.944832 1382 log.go:172] (0xc00013a0b0) (0xc00071a780) Stream added, broadcasting: 5\nI0209 11:29:54.946447 1382 log.go:172] (0xc00013a0b0) Reply frame received for 5\nI0209 11:29:55.095778 1382 log.go:172] (0xc00013a0b0) Data frame received for 3\nI0209 11:29:55.095893 1382 log.go:172] (0xc00071a6e0) (3) Data frame handling\nI0209 11:29:55.095917 1382 log.go:172] (0xc00071a6e0) (3) Data frame sent\nI0209 11:29:55.232325 1382 log.go:172] (0xc00013a0b0) Data frame received for 1\nI0209 11:29:55.232484 1382 log.go:172] (0xc00013a0b0) (0xc00071a780) Stream removed, broadcasting: 5\nI0209 11:29:55.232549 1382 log.go:172] (0xc00071a640) (1) Data frame handling\nI0209 11:29:55.232583 1382 log.go:172] (0xc00071a640) (1) Data frame sent\nI0209 11:29:55.232615 1382 log.go:172] (0xc00013a0b0) (0xc00071a6e0) Stream removed, broadcasting: 3\nI0209 11:29:55.232728 1382 log.go:172] (0xc00013a0b0) (0xc00071a640) Stream removed, broadcasting: 1\nI0209 11:29:55.232754 1382 log.go:172] (0xc00013a0b0) Go away received\nI0209 11:29:55.233373 1382 log.go:172] (0xc00013a0b0) (0xc00071a640) Stream removed, broadcasting: 1\nI0209 11:29:55.233401 1382 log.go:172] (0xc00013a0b0) (0xc00071a6e0) Stream removed, broadcasting: 3\nI0209 11:29:55.233412 1382 log.go:172] (0xc00013a0b0) (0xc00071a780) Stream removed, broadcasting: 5\n" Feb 9 11:29:55.246: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:29:55.246: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:30:05.340: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 9 11:30:15.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8kkzb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:30:16.211: INFO: stderr: "I0209 11:30:15.671446 1405 log.go:172] (0xc00071c370) (0xc0007b4640) Create stream\nI0209 11:30:15.671724 1405 log.go:172] (0xc00071c370) (0xc0007b4640) Stream added, broadcasting: 1\nI0209 11:30:15.679067 1405 log.go:172] (0xc00071c370) Reply frame received for 1\nI0209 11:30:15.679129 1405 log.go:172] (0xc00071c370) (0xc00065ec80) Create stream\nI0209 11:30:15.679169 1405 log.go:172] (0xc00071c370) (0xc00065ec80) Stream added, broadcasting: 3\nI0209 11:30:15.682839 1405 log.go:172] (0xc00071c370) Reply frame received for 3\nI0209 11:30:15.682882 1405 log.go:172] (0xc00071c370) (0xc0004e4000) Create stream\nI0209 11:30:15.682896 1405 log.go:172] (0xc00071c370) (0xc0004e4000) Stream added, broadcasting: 5\nI0209 11:30:15.686034 1405 log.go:172] (0xc00071c370) Reply frame received for 5\nI0209 11:30:15.834770 1405 log.go:172] (0xc00071c370) Data frame received for 3\nI0209 11:30:15.834882 1405 log.go:172] (0xc00065ec80) (3) Data frame handling\nI0209 11:30:15.834919 1405 log.go:172] (0xc00065ec80) (3) Data frame sent\nI0209 11:30:16.194535 1405 log.go:172] (0xc00071c370) (0xc00065ec80) Stream removed, broadcasting: 3\nI0209 11:30:16.194759 1405 log.go:172] (0xc00071c370) Data frame received for 1\nI0209 11:30:16.194791 1405 log.go:172] (0xc0007b4640) (1) Data frame handling\nI0209 11:30:16.194823 1405 log.go:172] (0xc0007b4640) (1) Data frame sent\nI0209 11:30:16.194848 1405 log.go:172] (0xc00071c370) (0xc0007b4640) Stream 
removed, broadcasting: 1\nI0209 11:30:16.195156 1405 log.go:172] (0xc00071c370) (0xc0004e4000) Stream removed, broadcasting: 5\nI0209 11:30:16.195383 1405 log.go:172] (0xc00071c370) Go away received\nI0209 11:30:16.195552 1405 log.go:172] (0xc00071c370) (0xc0007b4640) Stream removed, broadcasting: 1\nI0209 11:30:16.195582 1405 log.go:172] (0xc00071c370) (0xc00065ec80) Stream removed, broadcasting: 3\nI0209 11:30:16.195596 1405 log.go:172] (0xc00071c370) (0xc0004e4000) Stream removed, broadcasting: 5\n" Feb 9 11:30:16.211: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:30:16.211: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:30:16.370: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:30:16.370: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:16.370: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:16.370: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:26.393: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:30:26.393: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:26.393: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:36.417: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:30:36.417: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:36.417: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:46.396: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:30:46.396: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:30:56.393: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update Feb 9 11:30:56.394: INFO: Waiting for Pod e2e-tests-statefulset-8kkzb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 9 11:31:06.514: INFO: Waiting for StatefulSet e2e-tests-statefulset-8kkzb/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 9 11:31:16.396: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8kkzb Feb 9 11:31:16.402: INFO: Scaling statefulset ss2 to 0 Feb 9 11:31:46.448: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:31:46.461: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:31:46.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8kkzb" for this suite. 
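Editor's note: the StatefulSet exercise above changes the pod template image from nginx:1.14-alpine to nginx:1.15-alpine under the RollingUpdate strategy, waits for the new controller revision (ss2-7c9b54fd4c) to roll out pod by pod in reverse ordinal order, then reverts the template so the pods roll back to the original revision (ss2-6c5cd755cd). A trimmed sketch of the kind of object being updated; only the StatefulSet name, the service name, and the images come from the log, everything else is illustrative.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                          # name taken from the log
spec:
  serviceName: test                  # headless service the suite created in the same namespace
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo                  # illustrative labels
  updateStrategy:
    type: RollingUpdate              # pods are replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # changing this to 1.15-alpine creates a new ControllerRevision

Rolling back is just another template change back to the old image; the controller keeps both states as ControllerRevisions, which is why the log compares each pod's revision against ss2-6c5cd755cd and ss2-7c9b54fd4c.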
Feb 9 11:31:54.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:31:54.929: INFO: namespace: e2e-tests-statefulset-8kkzb, resource: bindings, ignored listing per whitelist Feb 9 11:31:54.929: INFO: namespace e2e-tests-statefulset-8kkzb deletion completed in 8.368306102s • [SLOW TEST:243.567 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:31:54.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:31:55.241: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 9 11:31:55.253: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bv92n/daemonsets","resourceVersion":"21080310"},"items":null} Feb 9 11:31:55.261: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bv92n/pods","resourceVersion":"21080310"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:31:55.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-bv92n" for this suite. 
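Editor's note: the DaemonSet rollback case above is skipped because the cluster has a single schedulable node ("Requires at least 2 nodes (not -1)"; the -1 suggests the framework's expected node count was simply never configured for this run). On a larger cluster the test drives a DaemonSet with a RollingUpdate strategy through an update and a rollback and asserts that unchanged pods are not restarted. A hedged sketch of such an object, all fields illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-rollback-demo      # hypothetical
spec:
  selector:
    matchLabels:
      app: ds-demo
  updateStrategy:
    type: RollingUpdate              # a rollback should not touch pods whose template never changed
  template:
    metadata:
      labels:
        app: ds-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine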
Feb 9 11:32:01.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:32:01.416: INFO: namespace: e2e-tests-daemonsets-bv92n, resource: bindings, ignored listing per whitelist Feb 9 11:32:01.538: INFO: namespace e2e-tests-daemonsets-bv92n deletion completed in 6.216133422s S [SKIPPING] [6.609 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:31:55.241: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:32:01.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-ca767038-4b2f-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 11:32:02.097: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-8vcqv" to be "success or failure" Feb 9 11:32:02.133: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.213449ms Feb 9 11:32:04.400: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302376113s Feb 9 11:32:06.429: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33202247s Feb 9 11:32:08.629: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531845958s Feb 9 11:32:10.661: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563341804s Feb 9 11:32:12.678: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58033417s Feb 9 11:32:14.801: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.703713904s STEP: Saw pod success Feb 9 11:32:14.801: INFO: Pod "pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:32:14.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 11:32:15.275: INFO: Waiting for pod pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005 to disappear Feb 9 11:32:15.447: INFO: Pod pod-projected-secrets-ca78d7df-4b2f-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:32:15.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8vcqv" for this suite. Feb 9 11:32:21.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:32:21.599: INFO: namespace: e2e-tests-projected-8vcqv, resource: bindings, ignored listing per whitelist Feb 9 11:32:21.689: INFO: namespace e2e-tests-projected-8vcqv deletion completed in 6.22802246s • [SLOW TEST:20.151 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:32:21.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 9 11:32:21.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5xj9z' Feb 9 11:32:23.949: INFO: stderr: "" Feb 9 11:32:23.949: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 9 11:32:24.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5xj9z' Feb 9 11:32:31.200: INFO: stderr: "" Feb 9 11:32:31.200: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:32:31.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5xj9z" for this suite. Feb 9 11:32:37.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:32:37.435: INFO: namespace: e2e-tests-kubectl-5xj9z, resource: bindings, ignored listing per whitelist Feb 9 11:32:37.510: INFO: namespace e2e-tests-kubectl-5xj9z deletion completed in 6.290527746s • [SLOW TEST:15.820 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:32:37.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dfb81ca3-4b2f-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:32:37.743: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-hn28h" to be "success or failure" Feb 9 11:32:37.772: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.414595ms Feb 9 11:32:39.987: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243771846s Feb 9 11:32:42.001: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257575314s Feb 9 11:32:44.018: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274170704s Feb 9 11:32:46.034: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290767108s Feb 9 11:32:48.053: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.309210485s STEP: Saw pod success Feb 9 11:32:48.053: INFO: Pod "pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:32:48.059: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 9 11:32:48.677: INFO: Waiting for pod pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005 to disappear Feb 9 11:32:48.687: INFO: Pod pod-projected-configmaps-dfb9bdd8-4b2f-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:32:48.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hn28h" for this suite. Feb 9 11:32:54.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:32:54.981: INFO: namespace: e2e-tests-projected-hn28h, resource: bindings, ignored listing per whitelist Feb 9 11:32:55.015: INFO: namespace e2e-tests-projected-hn28h deletion completed in 6.305803785s • [SLOW TEST:17.505 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:32:55.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-mh9l4 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 9 11:32:55.317: INFO: Found 0 stateful pods, waiting for 3 Feb 9 11:33:05.334: INFO: Found 1 stateful pods, waiting for 3 Feb 9 11:33:15.340: INFO: Found 2 stateful pods, waiting for 3 Feb 9 11:33:25.343: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:33:25.343: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:33:25.343: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 9 11:33:35.351: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9
11:33:35.351: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:33:35.351: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 9 11:33:35.412: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 9 11:33:45.483: INFO: Updating stateful set ss2 Feb 9 11:33:45.497: INFO: Waiting for Pod e2e-tests-statefulset-mh9l4/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:33:55.525: INFO: Waiting for Pod e2e-tests-statefulset-mh9l4/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 9 11:34:07.258: INFO: Found 2 stateful pods, waiting for 3 Feb 9 11:34:17.283: INFO: Found 2 stateful pods, waiting for 3 Feb 9 11:34:27.590: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:34:27.591: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:34:27.591: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 9 11:34:37.295: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:34:37.295: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:34:37.296: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 9 11:34:37.439: INFO: Updating stateful set ss2 Feb 9 11:34:37.575: INFO: Waiting for Pod e2e-tests-statefulset-mh9l4/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:34:47.603: INFO: Waiting for Pod e2e-tests-statefulset-mh9l4/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:34:58.664: INFO: Updating stateful set ss2 Feb 9 11:34:58.928: INFO: Waiting for StatefulSet e2e-tests-statefulset-mh9l4/ss2 to complete update Feb 9 11:34:58.929: INFO: Waiting for Pod e2e-tests-statefulset-mh9l4/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:35:08.958: INFO: Waiting for StatefulSet e2e-tests-statefulset-mh9l4/ss2 to complete update Feb 9 11:35:08.959: INFO: Waiting for Pod e2e-tests-statefulset-mh9l4/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 9 11:35:18.959: INFO: Waiting for StatefulSet e2e-tests-statefulset-mh9l4/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 9 11:35:28.971: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mh9l4 Feb 9 11:35:28.977: INFO: Scaling statefulset ss2 to 0 Feb 9 11:36:09.139: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:36:09.148: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:36:09.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-mh9l4" for this suite. 
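Canary and phased rolling updates like the ones above are driven by the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition move to the new revision. A minimal sketch of the same sequence (statefulset name, container name, and namespace illustrative):
# partition=2 means only ss2-2 picks up the next template change: the canary
kubectl -n statefulset-demo patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
kubectl -n statefulset-demo set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# phase the rollout by lowering the partition; 0 updates the remaining pods
kubectl -n statefulset-demo patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'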
Feb 9 11:36:17.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:36:17.506: INFO: namespace: e2e-tests-statefulset-mh9l4, resource: bindings, ignored listing per whitelist Feb 9 11:36:17.535: INFO: namespace e2e-tests-statefulset-mh9l4 deletion completed in 8.211593953s • [SLOW TEST:202.519 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:36:17.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 9 11:36:17.745: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vtqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vtqh/configmaps/e2e-watch-test-watch-closed,UID:62d18394-4b30-11ea-a994-fa163e34d433,ResourceVersion:21080953,Generation:0,CreationTimestamp:2020-02-09 11:36:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 9 11:36:17.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vtqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vtqh/configmaps/e2e-watch-test-watch-closed,UID:62d18394-4b30-11ea-a994-fa163e34d433,ResourceVersion:21080954,Generation:0,CreationTimestamp:2020-02-09 11:36:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the 
first watch closed Feb 9 11:36:17.771: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vtqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vtqh/configmaps/e2e-watch-test-watch-closed,UID:62d18394-4b30-11ea-a994-fa163e34d433,ResourceVersion:21080955,Generation:0,CreationTimestamp:2020-02-09 11:36:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 9 11:36:17.771: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vtqh,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vtqh/configmaps/e2e-watch-test-watch-closed,UID:62d18394-4b30-11ea-a994-fa163e34d433,ResourceVersion:21080956,Generation:0,CreationTimestamp:2020-02-09 11:36:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:36:17.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-4vtqh" for this suite. Feb 9 11:36:25.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:36:25.900: INFO: namespace: e2e-tests-watch-4vtqh, resource: bindings, ignored listing per whitelist Feb 9 11:36:26.040: INFO: namespace e2e-tests-watch-4vtqh deletion completed in 8.264256432s • [SLOW TEST:8.505 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:36:26.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 9 11:36:36.276: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-67e8f141-4b30-11ea-aa78-0242ac110005,GenerateName:,Namespace:e2e-tests-events-d5kt7,SelfLink:/api/v1/namespaces/e2e-tests-events-d5kt7/pods/send-events-67e8f141-4b30-11ea-aa78-0242ac110005,UID:67ea0e44-4b30-11ea-a994-fa163e34d433,ResourceVersion:21080993,Generation:0,CreationTimestamp:2020-02-09 11:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 205434060,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-j8w2z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j8w2z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-j8w2z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012336c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012336e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:36:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:36:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:36:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:36:26 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-09 11:36:26 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-09 11:36:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://d62104d8a6a1b28bb74606e93a4dcf51801187a8e80a2c866bc98cc29ffd6e5f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 9 11:36:38.294: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 9 11:36:40.311: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:36:40.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-d5kt7" for this suite. 
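The scheduler and kubelet events the test asserts on can also be pulled straight from the API; a rough equivalent of the two checks, assuming events can be filtered on the source field (pod name and namespace illustrative):
# events emitted by the scheduler for a specific pod
kubectl -n events-demo get events --field-selector involvedObject.name=send-events-demo,source=default-scheduler
# events emitted by the kubelet running that pod
kubectl -n events-demo get events --field-selector involvedObject.name=send-events-demo,source=kubelet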
Feb 9 11:37:24.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:37:24.594: INFO: namespace: e2e-tests-events-d5kt7, resource: bindings, ignored listing per whitelist Feb 9 11:37:24.696: INFO: namespace e2e-tests-events-d5kt7 deletion completed in 44.306798955s • [SLOW TEST:58.655 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:37:24.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:37:25.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-wp2zm" to be "success or failure" Feb 9 11:37:25.596: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2693ms Feb 9 11:37:27.611: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021082852s Feb 9 11:37:29.627: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037076745s Feb 9 11:37:31.915: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.325688577s Feb 9 11:37:33.928: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338854389s Feb 9 11:37:35.952: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.362917356s Feb 9 11:37:39.531: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.941838059s STEP: Saw pod success Feb 9 11:37:39.532: INFO: Pod "downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:37:39.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:37:40.126: INFO: Waiting for pod downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005 to disappear Feb 9 11:37:40.140: INFO: Pod downwardapi-volume-8b4a757d-4b30-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:37:40.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wp2zm" for this suite. Feb 9 11:37:46.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:37:46.322: INFO: namespace: e2e-tests-projected-wp2zm, resource: bindings, ignored listing per whitelist Feb 9 11:37:46.411: INFO: namespace e2e-tests-projected-wp2zm deletion completed in 6.260086637s • [SLOW TEST:21.714 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:37:46.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 9 11:37:46.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lzjwg' Feb 9 11:37:47.147: INFO: stderr: "" Feb 9 11:37:47.147: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 9 11:37:48.665: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:48.665: INFO: Found 0 / 1 Feb 9 11:37:49.171: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:49.171: INFO: Found 0 / 1 Feb 9 11:37:50.173: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:50.174: INFO: Found 0 / 1 Feb 9 11:37:51.166: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:51.167: INFO: Found 0 / 1 Feb 9 11:37:53.604: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:53.604: INFO: Found 0 / 1 Feb 9 11:37:54.396: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:54.397: INFO: Found 0 / 1 Feb 9 11:37:55.167: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:55.167: INFO: Found 0 / 1 Feb 9 11:37:56.165: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:56.165: INFO: Found 0 / 1 Feb 9 11:37:57.167: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:57.168: INFO: Found 0 / 1 Feb 9 11:37:58.215: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:58.215: INFO: Found 1 / 1 Feb 9 11:37:58.215: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 9 11:37:58.226: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:58.226: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 9 11:37:58.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-79jch --namespace=e2e-tests-kubectl-lzjwg -p {"metadata":{"annotations":{"x":"y"}}}' Feb 9 11:37:58.507: INFO: stderr: "" Feb 9 11:37:58.507: INFO: stdout: "pod/redis-master-79jch patched\n" STEP: checking annotations Feb 9 11:37:58.525: INFO: Selector matched 1 pods for map[app:redis] Feb 9 11:37:58.525: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:37:58.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lzjwg" for this suite. 
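The patch in the log is a plain strategic merge patch on pod metadata; applying and verifying it by hand looks roughly like this (pod name and namespace illustrative):
kubectl -n kubectl-demo patch pod redis-master-example -p '{"metadata":{"annotations":{"x":"y"}}}'
# confirm the annotation landed
kubectl -n kubectl-demo get pod redis-master-example -o jsonpath='{.metadata.annotations.x}'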
Feb 9 11:38:22.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:38:22.683: INFO: namespace: e2e-tests-kubectl-lzjwg, resource: bindings, ignored listing per whitelist Feb 9 11:38:22.829: INFO: namespace e2e-tests-kubectl-lzjwg deletion completed in 24.292245128s • [SLOW TEST:36.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:38:22.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 9 11:38:22.993: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:38:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g78q7" for this suite. 
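With --port 0 (here -p 0) the proxy binds an ephemeral port and prints it on stdout, which is what the test then curls; a minimal sketch (the printed port will differ per run):
kubectl proxy -p 0 --disable-filter &
# stdout reports something like "Starting to serve on 127.0.0.1:37841"; curl that address
curl http://127.0.0.1:37841/api/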
Feb 9 11:38:29.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:38:29.293: INFO: namespace: e2e-tests-kubectl-g78q7, resource: bindings, ignored listing per whitelist Feb 9 11:38:29.379: INFO: namespace e2e-tests-kubectl-g78q7 deletion completed in 6.235996381s • [SLOW TEST:6.549 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:38:29.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:38:35.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-822ph" for this suite. Feb 9 11:38:42.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:38:42.237: INFO: namespace: e2e-tests-namespaces-822ph, resource: bindings, ignored listing per whitelist Feb 9 11:38:42.242: INFO: namespace e2e-tests-namespaces-822ph deletion completed in 6.247554166s STEP: Destroying namespace "e2e-tests-nsdeletetest-22nqd" for this suite. Feb 9 11:38:42.249: INFO: Namespace e2e-tests-nsdeletetest-22nqd was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-7n4xr" for this suite. 
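The same guarantee can be observed by hand: a Service only lives as long as its namespace, and recreating the namespace does not bring it back. Sketch (names illustrative):
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
kubectl delete namespace nsdelete-demo
# once deletion finishes, recreate the namespace and confirm it contains no services
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo get services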
Feb 9 11:38:48.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:38:48.479: INFO: namespace: e2e-tests-nsdeletetest-7n4xr, resource: bindings, ignored listing per whitelist Feb 9 11:38:48.734: INFO: namespace e2e-tests-nsdeletetest-7n4xr deletion completed in 6.485466593s • [SLOW TEST:19.354 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:38:48.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 9 11:38:48.997: INFO: Waiting up to 5m0s for pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005" in namespace "e2e-tests-containers-j69gc" to be "success or failure" Feb 9 11:38:49.011: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.336895ms Feb 9 11:38:51.049: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052565198s Feb 9 11:38:53.088: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091400023s Feb 9 11:38:55.276: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279318572s Feb 9 11:38:57.286: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289459094s Feb 9 11:38:59.312: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.314805222s STEP: Saw pod success Feb 9 11:38:59.312: INFO: Pod "client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:38:59.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:39:00.629: INFO: Waiting for pod client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005 to disappear Feb 9 11:39:00.784: INFO: Pod client-containers-bd02b0fb-4b30-11ea-aa78-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:39:00.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-j69gc" for this suite. 
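When a container spec leaves command and args empty, the image's own ENTRYPOINT and CMD are used unchanged, which is what the defaults test above verifies. A minimal sketch using the nginx image already pulled by this suite (pod name illustrative):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine
    # command and args intentionally omitted: the image defaults start nginx
EOF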
Feb 9 11:39:07.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:39:07.125: INFO: namespace: e2e-tests-containers-j69gc, resource: bindings, ignored listing per whitelist Feb 9 11:39:07.219: INFO: namespace e2e-tests-containers-j69gc deletion completed in 6.211305693s • [SLOW TEST:18.485 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:39:07.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 9 11:39:07.415: INFO: Waiting up to 5m0s for pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-gb66g" to be "success or failure" Feb 9 11:39:07.428: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.080108ms Feb 9 11:39:09.440: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025170545s Feb 9 11:39:11.454: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039153188s Feb 9 11:39:13.946: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531540487s Feb 9 11:39:17.628: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213086111s Feb 9 11:39:19.658: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.24283981s Feb 9 11:39:21.672: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.257144662s STEP: Saw pod success Feb 9 11:39:21.672: INFO: Pod "pod-c7fa8042-4b30-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:39:21.676: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c7fa8042-4b30-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:39:22.260: INFO: Waiting for pod pod-c7fa8042-4b30-11ea-aa78-0242ac110005 to disappear Feb 9 11:39:22.279: INFO: Pod pod-c7fa8042-4b30-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:39:22.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gb66g" for this suite. 
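The (non-root,0777,tmpfs) case boils down to a memory-backed emptyDir mounted into a container that runs as a non-root UID and writes a world-accessible file; a minimal sketch (UID, image, and paths illustrative):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -l /mnt/volume/f && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo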
Feb 9 11:39:28.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:39:28.846: INFO: namespace: e2e-tests-emptydir-gb66g, resource: bindings, ignored listing per whitelist Feb 9 11:39:28.851: INFO: namespace e2e-tests-emptydir-gb66g deletion completed in 6.55711034s • [SLOW TEST:21.632 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:39:28.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d4e33cf9-4b30-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:39:29.060: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-hq6f7" to be "success or failure" Feb 9 11:39:29.079: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.431842ms Feb 9 11:39:31.397: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336440503s Feb 9 11:39:33.432: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37156467s Feb 9 11:39:36.342: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.281683067s Feb 9 11:39:38.355: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.294179569s Feb 9 11:39:40.379: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.318146672s Feb 9 11:39:42.401: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.340493271s STEP: Saw pod success Feb 9 11:39:42.401: INFO: Pod "pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:39:42.410: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 9 11:39:43.548: INFO: Waiting for pod pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005 to disappear Feb 9 11:39:43.971: INFO: Pod pod-projected-configmaps-d4e44304-4b30-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:39:43.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hq6f7" for this suite. Feb 9 11:39:50.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:39:50.178: INFO: namespace: e2e-tests-projected-hq6f7, resource: bindings, ignored listing per whitelist Feb 9 11:39:50.363: INFO: namespace e2e-tests-projected-hq6f7 deletion completed in 6.378427099s • [SLOW TEST:21.512 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:39:50.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:39:50.588: INFO: Creating ReplicaSet my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005 Feb 9 11:39:50.701: INFO: Pod name my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005: Found 0 pods out of 1 Feb 9 11:39:55.714: INFO: Pod name my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005: Found 1 pods out of 1 Feb 9 11:39:55.714: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005" is running Feb 9 11:40:01.741: INFO: Pod "my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005-wwfq8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 11:39:50 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 11:39:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 11:39:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 11:39:50 +0000 UTC Reason: Message:}]) Feb 9 11:40:01.741: INFO: Trying to dial the pod Feb 9 11:40:06.784: INFO: Controller my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005: Got expected result from replica 1 [my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005-wwfq8]: "my-hostname-basic-e1bb4233-4b30-11ea-aa78-0242ac110005-wwfq8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:40:06.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-gnm2v" for this suite. Feb 9 11:40:14.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:40:14.930: INFO: namespace: e2e-tests-replicaset-gnm2v, resource: bindings, ignored listing per whitelist Feb 9 11:40:15.055: INFO: namespace e2e-tests-replicaset-gnm2v deletion completed in 8.263474924s • [SLOW TEST:24.692 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:40:15.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:40:15.304: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 9 11:40:20.320: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 9 11:40:30.340: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 9 11:40:32.350: INFO: Creating deployment "test-rollover-deployment" Feb 9 11:40:32.502: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 9 11:40:34.557: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 9 11:40:34.587: INFO: Ensure that both replica sets have 1 created replica Feb 9 11:40:34.614: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 9 11:40:34.658: INFO: Updating deployment test-rollover-deployment Feb 9 11:40:34.658: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 9 11:40:37.064: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 9 11:40:37.112: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 9 11:40:37.130: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:37.130: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845236, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:39.146: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:39.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845236, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:41.151: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:41.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845236, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:44.001: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:44.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845236, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:45.180: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:45.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845236, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:47.206: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:47.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845236, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:49.167: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:49.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845247, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:51.147: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:51.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845247, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:53.156: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:53.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845247, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:55.155: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:55.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845247, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:57.163: INFO: all replica sets need to contain the pod-template-hash label Feb 9 11:40:57.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845233, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845247, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716845232, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 11:40:59.730: INFO: Feb 9 11:40:59.731: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 9 11:40:59.981: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-sqwtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqwtg/deployments/test-rollover-deployment,UID:faa01188-4b30-11ea-a994-fa163e34d433,ResourceVersion:21081580,Generation:2,CreationTimestamp:2020-02-09 11:40:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-09 11:40:33 +0000 UTC 2020-02-09 11:40:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-09 11:40:58 +0000 UTC 2020-02-09 11:40:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 9 11:40:59.992: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-sqwtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqwtg/replicasets/test-rollover-deployment-5b8479fdb6,UID:fc00cb04-4b30-11ea-a994-fa163e34d433,ResourceVersion:21081571,Generation:2,CreationTimestamp:2020-02-09 11:40:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment faa01188-4b30-11ea-a994-fa163e34d433 0xc001f9fbe7 0xc001f9fbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 9 11:40:59.992: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 9 11:40:59.992: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-sqwtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqwtg/replicasets/test-rollover-controller,UID:f071ddfb-4b30-11ea-a994-fa163e34d433,ResourceVersion:21081579,Generation:2,CreationTimestamp:2020-02-09 11:40:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment faa01188-4b30-11ea-a994-fa163e34d433 0xc001f9f907 0xc001f9f908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 9 11:40:59.993: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-sqwtg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqwtg/replicasets/test-rollover-deployment-58494b7559,UID:fac64b00-4b30-11ea-a994-fa163e34d433,ResourceVersion:21081535,Generation:2,CreationTimestamp:2020-02-09 11:40:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment faa01188-4b30-11ea-a994-fa163e34d433 0xc001f9f9d7 0xc001f9f9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 9 11:41:00.011: INFO: Pod "test-rollover-deployment-5b8479fdb6-46f5n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-46f5n,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-sqwtg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sqwtg/pods/test-rollover-deployment-5b8479fdb6-46f5n,UID:fcd3d50b-4b30-11ea-a994-fa163e34d433,ResourceVersion:21081556,Generation:0,CreationTimestamp:2020-02-09 11:40:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 fc00cb04-4b30-11ea-a994-fa163e34d433 0xc001ff7677 
0xc001ff7678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9fzzt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fzzt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9fzzt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ff7750} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ff7770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:40:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:40:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:40:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:40:36 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-09 11:40:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-09 11:40:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1cc139c5bc5ca907ab5fd846f2523aec9dccb00f88695e9c6e65b9ebc4987fde}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:41:00.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sqwtg" for this suite. 
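The rollover exercised above (a Deployment laid over an existing pod set, its pod-template image updated, and the old ReplicaSets drained to zero replicas) can be reproduced by hand outside the e2e framework. A minimal kubectl sketch, assuming a reasonably current kubectl and borrowing the deployment name and images from this log; the test itself drives the API through client-go, so these commands are illustrative rather than what the framework runs:

# Create a single-replica deployment on the old image.
kubectl create deployment test-rollover-deployment --image=docker.io/library/nginx:1.14-alpine

# Changing the pod-template image creates a new ReplicaSet; the deployment
# controller scales it up and drains the old ReplicaSet to zero replicas.
kubectl set image deployment/test-rollover-deployment nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Wait for the rollover to finish, then confirm the old ReplicaSet keeps no replicas.
kubectl rollout status deployment/test-rollover-deployment
kubectl get rs -l app=test-rollover-deployment

The MinReadySeconds of 10 visible in the spec dump above is also why the framework keeps polling for roughly ten more seconds after ReadyReplicas reaches 2 before the rollout is recorded as NewReplicaSetAvailable.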
Feb 9 11:41:08.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:41:08.224: INFO: namespace: e2e-tests-deployment-sqwtg, resource: bindings, ignored listing per whitelist Feb 9 11:41:08.286: INFO: namespace e2e-tests-deployment-sqwtg deletion completed in 8.25789366s • [SLOW TEST:53.230 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:41:08.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 9 11:41:10.216: INFO: Waiting up to 5m0s for pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-lgr9x" to be "success or failure" Feb 9 11:41:10.229: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.027361ms Feb 9 11:41:12.272: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055888991s Feb 9 11:41:15.772: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.556385418s Feb 9 11:41:17.812: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.596267353s Feb 9 11:41:19.895: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.678732192s Feb 9 11:41:22.355: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.139348704s STEP: Saw pod success Feb 9 11:41:22.356: INFO: Pod "pod-111a0e0f-4b31-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:41:22.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-111a0e0f-4b31-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:41:22.568: INFO: Waiting for pod pod-111a0e0f-4b31-11ea-aa78-0242ac110005 to disappear Feb 9 11:41:22.583: INFO: Pod pod-111a0e0f-4b31-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:41:22.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lgr9x" for this suite. 
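The emptyDir permission cases in this run (non-root,0777,default above; root,0666,default further below) all follow the same shape: mount an emptyDir on the default medium, create a file with the requested mode, and print the resulting permissions; the framework waits for the pod to succeed and checks the printed mode in the container log. A rough stand-alone equivalent, with an illustrative pod name and busybox standing in for the framework's own test image:

# Illustrative only: a pod that mounts an emptyDir as a non-root user,
# creates a 0777 file on it, and prints the permissions before exiting.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/file && chmod 0777 /mnt/volume/file && ls -l /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}
EOF

# Watch until the pod reports Succeeded (Ctrl-C to stop), then read its log;
# this mirrors the "Saw pod success" / "Trying to get logs" steps in the transcript.
kubectl get pod emptydir-mode-demo --watch
kubectl logs emptydir-mode-demo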
Feb 9 11:41:28.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:41:28.790: INFO: namespace: e2e-tests-emptydir-lgr9x, resource: bindings, ignored listing per whitelist Feb 9 11:41:28.829: INFO: namespace e2e-tests-emptydir-lgr9x deletion completed in 6.228547855s • [SLOW TEST:20.542 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:41:28.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:41:29.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-grlf4" to be "success or failure" Feb 9 11:41:29.300: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 235.272928ms Feb 9 11:41:31.400: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335830675s Feb 9 11:41:33.428: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363449227s Feb 9 11:41:35.914: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.849749497s Feb 9 11:41:37.928: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863577518s Feb 9 11:41:39.970: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.905610946s STEP: Saw pod success Feb 9 11:41:39.970: INFO: Pod "downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:41:39.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:41:40.077: INFO: Waiting for pod downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005 to disappear Feb 9 11:41:40.089: INFO: Pod downwardapi-volume-1c6ab27d-4b31-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:41:40.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-grlf4" for this suite. Feb 9 11:41:46.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:41:46.688: INFO: namespace: e2e-tests-projected-grlf4, resource: bindings, ignored listing per whitelist Feb 9 11:41:46.717: INFO: namespace e2e-tests-projected-grlf4 deletion completed in 6.595897444s • [SLOW TEST:17.888 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:41:46.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 9 11:41:47.113: INFO: Waiting up to 5m0s for pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-v7zhf" to be "success or failure" Feb 9 11:41:47.371: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 257.5582ms Feb 9 11:41:49.790: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676486218s Feb 9 11:41:51.836: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723092642s Feb 9 11:41:54.319: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.205737598s Feb 9 11:41:56.384: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.270923748s Feb 9 11:41:58.401: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.287563309s STEP: Saw pod success Feb 9 11:41:58.401: INFO: Pod "pod-272cad7b-4b31-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:41:58.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-272cad7b-4b31-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:41:58.544: INFO: Waiting for pod pod-272cad7b-4b31-11ea-aa78-0242ac110005 to disappear Feb 9 11:41:58.557: INFO: Pod pod-272cad7b-4b31-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:41:58.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-v7zhf" for this suite. Feb 9 11:42:05.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:42:05.840: INFO: namespace: e2e-tests-emptydir-v7zhf, resource: bindings, ignored listing per whitelist Feb 9 11:42:05.883: INFO: namespace e2e-tests-emptydir-v7zhf deletion completed in 7.305087502s • [SLOW TEST:19.166 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:42:05.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xfk4t [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xfk4t STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xfk4t Feb 9 11:42:06.147: INFO: Found 0 stateful pods, waiting for 1 Feb 9 11:42:16.261: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 9 11:42:16.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:42:17.032: INFO: stderr: "I0209 11:42:16.640194 1539 log.go:172] (0xc00013a6e0) (0xc000712640) Create stream\nI0209 11:42:16.640539 1539 log.go:172] (0xc00013a6e0) (0xc000712640) Stream added, broadcasting: 1\nI0209 11:42:16.656207 1539 log.go:172] 
(0xc00013a6e0) Reply frame received for 1\nI0209 11:42:16.656327 1539 log.go:172] (0xc00013a6e0) (0xc0005ecf00) Create stream\nI0209 11:42:16.656354 1539 log.go:172] (0xc00013a6e0) (0xc0005ecf00) Stream added, broadcasting: 3\nI0209 11:42:16.658887 1539 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0209 11:42:16.658967 1539 log.go:172] (0xc00013a6e0) (0xc00029e000) Create stream\nI0209 11:42:16.658987 1539 log.go:172] (0xc00013a6e0) (0xc00029e000) Stream added, broadcasting: 5\nI0209 11:42:16.661713 1539 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0209 11:42:16.900699 1539 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0209 11:42:16.900768 1539 log.go:172] (0xc0005ecf00) (3) Data frame handling\nI0209 11:42:16.900785 1539 log.go:172] (0xc0005ecf00) (3) Data frame sent\nI0209 11:42:17.021475 1539 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0209 11:42:17.021637 1539 log.go:172] (0xc00013a6e0) (0xc0005ecf00) Stream removed, broadcasting: 3\nI0209 11:42:17.021713 1539 log.go:172] (0xc000712640) (1) Data frame handling\nI0209 11:42:17.021747 1539 log.go:172] (0xc000712640) (1) Data frame sent\nI0209 11:42:17.021756 1539 log.go:172] (0xc00013a6e0) (0xc000712640) Stream removed, broadcasting: 1\nI0209 11:42:17.021814 1539 log.go:172] (0xc00013a6e0) (0xc00029e000) Stream removed, broadcasting: 5\nI0209 11:42:17.022021 1539 log.go:172] (0xc00013a6e0) Go away received\nI0209 11:42:17.022312 1539 log.go:172] (0xc00013a6e0) (0xc000712640) Stream removed, broadcasting: 1\nI0209 11:42:17.022327 1539 log.go:172] (0xc00013a6e0) (0xc0005ecf00) Stream removed, broadcasting: 3\nI0209 11:42:17.022338 1539 log.go:172] (0xc00013a6e0) (0xc00029e000) Stream removed, broadcasting: 5\n" Feb 9 11:42:17.033: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:42:17.033: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:42:17.054: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 9 11:42:27.070: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:42:27.070: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:42:27.110: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:42:27.110: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:42:27.110: INFO: Feb 9 11:42:27.110: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 9 11:42:28.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98793409s Feb 9 11:42:30.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.475242526s Feb 9 11:42:31.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.84565848s Feb 9 11:42:32.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.685320117s Feb 9 11:42:33.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.657000599s Feb 9 11:42:36.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.353297177s STEP: Scaling up stateful set ss to 
3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xfk4t Feb 9 11:42:38.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:42:41.234: INFO: stderr: "I0209 11:42:39.454236 1561 log.go:172] (0xc0001386e0) (0xc00065f2c0) Create stream\nI0209 11:42:39.454638 1561 log.go:172] (0xc0001386e0) (0xc00065f2c0) Stream added, broadcasting: 1\nI0209 11:42:39.462978 1561 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0209 11:42:39.463223 1561 log.go:172] (0xc0001386e0) (0xc000732000) Create stream\nI0209 11:42:39.463302 1561 log.go:172] (0xc0001386e0) (0xc000732000) Stream added, broadcasting: 3\nI0209 11:42:39.465882 1561 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0209 11:42:39.465934 1561 log.go:172] (0xc0001386e0) (0xc000270000) Create stream\nI0209 11:42:39.465949 1561 log.go:172] (0xc0001386e0) (0xc000270000) Stream added, broadcasting: 5\nI0209 11:42:39.467638 1561 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0209 11:42:40.405569 1561 log.go:172] (0xc0001386e0) Data frame received for 3\nI0209 11:42:40.405710 1561 log.go:172] (0xc000732000) (3) Data frame handling\nI0209 11:42:40.405736 1561 log.go:172] (0xc000732000) (3) Data frame sent\nI0209 11:42:41.225725 1561 log.go:172] (0xc0001386e0) (0xc000732000) Stream removed, broadcasting: 3\nI0209 11:42:41.225883 1561 log.go:172] (0xc0001386e0) Data frame received for 1\nI0209 11:42:41.225908 1561 log.go:172] (0xc00065f2c0) (1) Data frame handling\nI0209 11:42:41.225922 1561 log.go:172] (0xc00065f2c0) (1) Data frame sent\nI0209 11:42:41.225932 1561 log.go:172] (0xc0001386e0) (0xc00065f2c0) Stream removed, broadcasting: 1\nI0209 11:42:41.226013 1561 log.go:172] (0xc0001386e0) (0xc000270000) Stream removed, broadcasting: 5\nI0209 11:42:41.226103 1561 log.go:172] (0xc0001386e0) Go away received\nI0209 11:42:41.226410 1561 log.go:172] (0xc0001386e0) (0xc00065f2c0) Stream removed, broadcasting: 1\nI0209 11:42:41.226425 1561 log.go:172] (0xc0001386e0) (0xc000732000) Stream removed, broadcasting: 3\nI0209 11:42:41.226432 1561 log.go:172] (0xc0001386e0) (0xc000270000) Stream removed, broadcasting: 5\n" Feb 9 11:42:41.234: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:42:41.234: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:42:41.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:42:41.508: INFO: rc: 1 Feb 9 11:42:41.509: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000789f80 exit status 1 true [0xc0011decb0 0xc0011decc8 0xc0011dece0] [0xc0011decb0 0xc0011decc8 0xc0011dece0] [0xc0011decc0 0xc0011decd8] [0x935700 0x935700] 0xc001e691a0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 9 11:42:51.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-1 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:42:52.218: INFO: stderr: "I0209 11:42:51.789675 1604 log.go:172] (0xc000150840) (0xc0005c3360) Create stream\nI0209 11:42:51.789893 1604 log.go:172] (0xc000150840) (0xc0005c3360) Stream added, broadcasting: 1\nI0209 11:42:51.799959 1604 log.go:172] (0xc000150840) Reply frame received for 1\nI0209 11:42:51.800262 1604 log.go:172] (0xc000150840) (0xc0005c3400) Create stream\nI0209 11:42:51.800294 1604 log.go:172] (0xc000150840) (0xc0005c3400) Stream added, broadcasting: 3\nI0209 11:42:51.803781 1604 log.go:172] (0xc000150840) Reply frame received for 3\nI0209 11:42:51.803913 1604 log.go:172] (0xc000150840) (0xc0007cc000) Create stream\nI0209 11:42:51.803965 1604 log.go:172] (0xc000150840) (0xc0007cc000) Stream added, broadcasting: 5\nI0209 11:42:51.807056 1604 log.go:172] (0xc000150840) Reply frame received for 5\nI0209 11:42:51.983701 1604 log.go:172] (0xc000150840) Data frame received for 5\nI0209 11:42:51.983888 1604 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0209 11:42:51.983929 1604 log.go:172] (0xc0007cc000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0209 11:42:51.983981 1604 log.go:172] (0xc000150840) Data frame received for 3\nI0209 11:42:51.983997 1604 log.go:172] (0xc0005c3400) (3) Data frame handling\nI0209 11:42:51.984019 1604 log.go:172] (0xc0005c3400) (3) Data frame sent\nI0209 11:42:52.199634 1604 log.go:172] (0xc000150840) Data frame received for 1\nI0209 11:42:52.199812 1604 log.go:172] (0xc0005c3360) (1) Data frame handling\nI0209 11:42:52.199865 1604 log.go:172] (0xc0005c3360) (1) Data frame sent\nI0209 11:42:52.201017 1604 log.go:172] (0xc000150840) (0xc0005c3360) Stream removed, broadcasting: 1\nI0209 11:42:52.201130 1604 log.go:172] (0xc000150840) (0xc0005c3400) Stream removed, broadcasting: 3\nI0209 11:42:52.201230 1604 log.go:172] (0xc000150840) (0xc0007cc000) Stream removed, broadcasting: 5\nI0209 11:42:52.201306 1604 log.go:172] (0xc000150840) Go away received\nI0209 11:42:52.201802 1604 log.go:172] (0xc000150840) (0xc0005c3360) Stream removed, broadcasting: 1\nI0209 11:42:52.201826 1604 log.go:172] (0xc000150840) (0xc0005c3400) Stream removed, broadcasting: 3\nI0209 11:42:52.201836 1604 log.go:172] (0xc000150840) (0xc0007cc000) Stream removed, broadcasting: 5\n" Feb 9 11:42:52.218: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:42:52.218: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:42:52.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:42:52.931: INFO: stderr: "I0209 11:42:52.404220 1627 log.go:172] (0xc000704370) (0xc000724640) Create stream\nI0209 11:42:52.404437 1627 log.go:172] (0xc000704370) (0xc000724640) Stream added, broadcasting: 1\nI0209 11:42:52.408643 1627 log.go:172] (0xc000704370) Reply frame received for 1\nI0209 11:42:52.408742 1627 log.go:172] (0xc000704370) (0xc00064ed20) Create stream\nI0209 11:42:52.408751 1627 log.go:172] (0xc000704370) (0xc00064ed20) Stream added, broadcasting: 3\nI0209 11:42:52.409905 1627 log.go:172] (0xc000704370) Reply frame received for 3\nI0209 11:42:52.409949 1627 log.go:172] (0xc000704370) (0xc000702000) Create stream\nI0209 11:42:52.409963 1627 log.go:172] (0xc000704370) (0xc000702000) Stream added, 
broadcasting: 5\nI0209 11:42:52.410865 1627 log.go:172] (0xc000704370) Reply frame received for 5\nI0209 11:42:52.713285 1627 log.go:172] (0xc000704370) Data frame received for 5\nI0209 11:42:52.713473 1627 log.go:172] (0xc000702000) (5) Data frame handling\nI0209 11:42:52.713502 1627 log.go:172] (0xc000702000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0209 11:42:52.713568 1627 log.go:172] (0xc000704370) Data frame received for 3\nI0209 11:42:52.713590 1627 log.go:172] (0xc00064ed20) (3) Data frame handling\nI0209 11:42:52.713605 1627 log.go:172] (0xc00064ed20) (3) Data frame sent\nI0209 11:42:52.917165 1627 log.go:172] (0xc000704370) Data frame received for 1\nI0209 11:42:52.917497 1627 log.go:172] (0xc000704370) (0xc00064ed20) Stream removed, broadcasting: 3\nI0209 11:42:52.917576 1627 log.go:172] (0xc000724640) (1) Data frame handling\nI0209 11:42:52.917604 1627 log.go:172] (0xc000724640) (1) Data frame sent\nI0209 11:42:52.917611 1627 log.go:172] (0xc000704370) (0xc000724640) Stream removed, broadcasting: 1\nI0209 11:42:52.918598 1627 log.go:172] (0xc000704370) (0xc000702000) Stream removed, broadcasting: 5\nI0209 11:42:52.918716 1627 log.go:172] (0xc000704370) (0xc000724640) Stream removed, broadcasting: 1\nI0209 11:42:52.918729 1627 log.go:172] (0xc000704370) (0xc00064ed20) Stream removed, broadcasting: 3\nI0209 11:42:52.918736 1627 log.go:172] (0xc000704370) (0xc000702000) Stream removed, broadcasting: 5\n" Feb 9 11:42:52.931: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 9 11:42:52.931: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 9 11:42:52.950: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:42:52.950: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 9 11:42:52.950: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 9 11:42:52.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:42:53.583: INFO: stderr: "I0209 11:42:53.199559 1649 log.go:172] (0xc000710370) (0xc000786640) Create stream\nI0209 11:42:53.199860 1649 log.go:172] (0xc000710370) (0xc000786640) Stream added, broadcasting: 1\nI0209 11:42:53.204889 1649 log.go:172] (0xc000710370) Reply frame received for 1\nI0209 11:42:53.205051 1649 log.go:172] (0xc000710370) (0xc0005eec80) Create stream\nI0209 11:42:53.205070 1649 log.go:172] (0xc000710370) (0xc0005eec80) Stream added, broadcasting: 3\nI0209 11:42:53.207615 1649 log.go:172] (0xc000710370) Reply frame received for 3\nI0209 11:42:53.207660 1649 log.go:172] (0xc000710370) (0xc000660000) Create stream\nI0209 11:42:53.207671 1649 log.go:172] (0xc000710370) (0xc000660000) Stream added, broadcasting: 5\nI0209 11:42:53.209094 1649 log.go:172] (0xc000710370) Reply frame received for 5\nI0209 11:42:53.338120 1649 log.go:172] (0xc000710370) Data frame received for 3\nI0209 11:42:53.338344 1649 log.go:172] (0xc0005eec80) (3) Data frame handling\nI0209 11:42:53.338373 1649 log.go:172] (0xc0005eec80) (3) Data frame sent\nI0209 11:42:53.563228 1649 log.go:172] (0xc000710370) Data frame received for 1\nI0209 11:42:53.563351 1649 log.go:172] (0xc000786640) (1) Data frame 
handling\nI0209 11:42:53.563405 1649 log.go:172] (0xc000786640) (1) Data frame sent\nI0209 11:42:53.564292 1649 log.go:172] (0xc000710370) (0xc000786640) Stream removed, broadcasting: 1\nI0209 11:42:53.564429 1649 log.go:172] (0xc000710370) (0xc0005eec80) Stream removed, broadcasting: 3\nI0209 11:42:53.564577 1649 log.go:172] (0xc000710370) (0xc000660000) Stream removed, broadcasting: 5\nI0209 11:42:53.564836 1649 log.go:172] (0xc000710370) (0xc000786640) Stream removed, broadcasting: 1\nI0209 11:42:53.564854 1649 log.go:172] (0xc000710370) (0xc0005eec80) Stream removed, broadcasting: 3\nI0209 11:42:53.564874 1649 log.go:172] (0xc000710370) (0xc000660000) Stream removed, broadcasting: 5\n" Feb 9 11:42:53.583: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:42:53.583: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:42:53.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:42:54.399: INFO: stderr: "I0209 11:42:53.889564 1671 log.go:172] (0xc000138840) (0xc000738640) Create stream\nI0209 11:42:53.890014 1671 log.go:172] (0xc000138840) (0xc000738640) Stream added, broadcasting: 1\nI0209 11:42:53.907623 1671 log.go:172] (0xc000138840) Reply frame received for 1\nI0209 11:42:53.907727 1671 log.go:172] (0xc000138840) (0xc0005b2d20) Create stream\nI0209 11:42:53.907741 1671 log.go:172] (0xc000138840) (0xc0005b2d20) Stream added, broadcasting: 3\nI0209 11:42:53.909096 1671 log.go:172] (0xc000138840) Reply frame received for 3\nI0209 11:42:53.909134 1671 log.go:172] (0xc000138840) (0xc0005b2e60) Create stream\nI0209 11:42:53.909143 1671 log.go:172] (0xc000138840) (0xc0005b2e60) Stream added, broadcasting: 5\nI0209 11:42:53.911207 1671 log.go:172] (0xc000138840) Reply frame received for 5\nI0209 11:42:54.218259 1671 log.go:172] (0xc000138840) Data frame received for 3\nI0209 11:42:54.218330 1671 log.go:172] (0xc0005b2d20) (3) Data frame handling\nI0209 11:42:54.218354 1671 log.go:172] (0xc0005b2d20) (3) Data frame sent\nI0209 11:42:54.389835 1671 log.go:172] (0xc000138840) Data frame received for 1\nI0209 11:42:54.389999 1671 log.go:172] (0xc000138840) (0xc0005b2d20) Stream removed, broadcasting: 3\nI0209 11:42:54.390040 1671 log.go:172] (0xc000738640) (1) Data frame handling\nI0209 11:42:54.390051 1671 log.go:172] (0xc000738640) (1) Data frame sent\nI0209 11:42:54.390057 1671 log.go:172] (0xc000138840) (0xc000738640) Stream removed, broadcasting: 1\nI0209 11:42:54.390588 1671 log.go:172] (0xc000138840) (0xc0005b2e60) Stream removed, broadcasting: 5\nI0209 11:42:54.390630 1671 log.go:172] (0xc000138840) (0xc000738640) Stream removed, broadcasting: 1\nI0209 11:42:54.390646 1671 log.go:172] (0xc000138840) (0xc0005b2d20) Stream removed, broadcasting: 3\nI0209 11:42:54.390683 1671 log.go:172] (0xc000138840) (0xc0005b2e60) Stream removed, broadcasting: 5\nI0209 11:42:54.390926 1671 log.go:172] (0xc000138840) Go away received\n" Feb 9 11:42:54.399: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:42:54.399: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:42:54.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-2 -- /bin/sh -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Feb 9 11:42:54.823: INFO: stderr: "I0209 11:42:54.607539 1692 log.go:172] (0xc0006f2370) (0xc000665360) Create stream\nI0209 11:42:54.607854 1692 log.go:172] (0xc0006f2370) (0xc000665360) Stream added, broadcasting: 1\nI0209 11:42:54.610772 1692 log.go:172] (0xc0006f2370) Reply frame received for 1\nI0209 11:42:54.610805 1692 log.go:172] (0xc0006f2370) (0xc0005d0000) Create stream\nI0209 11:42:54.610813 1692 log.go:172] (0xc0006f2370) (0xc0005d0000) Stream added, broadcasting: 3\nI0209 11:42:54.611552 1692 log.go:172] (0xc0006f2370) Reply frame received for 3\nI0209 11:42:54.611570 1692 log.go:172] (0xc0006f2370) (0xc0005d00a0) Create stream\nI0209 11:42:54.611577 1692 log.go:172] (0xc0006f2370) (0xc0005d00a0) Stream added, broadcasting: 5\nI0209 11:42:54.612240 1692 log.go:172] (0xc0006f2370) Reply frame received for 5\nI0209 11:42:54.711406 1692 log.go:172] (0xc0006f2370) Data frame received for 3\nI0209 11:42:54.711483 1692 log.go:172] (0xc0005d0000) (3) Data frame handling\nI0209 11:42:54.711504 1692 log.go:172] (0xc0005d0000) (3) Data frame sent\nI0209 11:42:54.813254 1692 log.go:172] (0xc0006f2370) (0xc0005d0000) Stream removed, broadcasting: 3\nI0209 11:42:54.813329 1692 log.go:172] (0xc0006f2370) Data frame received for 1\nI0209 11:42:54.813353 1692 log.go:172] (0xc000665360) (1) Data frame handling\nI0209 11:42:54.813370 1692 log.go:172] (0xc000665360) (1) Data frame sent\nI0209 11:42:54.813393 1692 log.go:172] (0xc0006f2370) (0xc000665360) Stream removed, broadcasting: 1\nI0209 11:42:54.813453 1692 log.go:172] (0xc0006f2370) (0xc0005d00a0) Stream removed, broadcasting: 5\nI0209 11:42:54.813491 1692 log.go:172] (0xc0006f2370) Go away received\nI0209 11:42:54.813933 1692 log.go:172] (0xc0006f2370) (0xc000665360) Stream removed, broadcasting: 1\nI0209 11:42:54.813943 1692 log.go:172] (0xc0006f2370) (0xc0005d0000) Stream removed, broadcasting: 3\nI0209 11:42:54.813948 1692 log.go:172] (0xc0006f2370) (0xc0005d00a0) Stream removed, broadcasting: 5\n" Feb 9 11:42:54.823: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 9 11:42:54.823: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 9 11:42:54.823: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:42:54.857: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 9 11:43:04.927: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:43:04.928: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:43:04.928: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 9 11:43:05.002: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:05.002: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:05.002: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:05.002: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:05.002: INFO: Feb 9 11:43:05.002: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:07.734: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:07.735: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:07.735: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:07.735: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:07.735: INFO: Feb 9 11:43:07.735: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:08.760: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:08.760: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:08.760: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:08.760: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:08.760: INFO: Feb 9 11:43:08.760: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:10.708: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:10.708: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:10.709: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:10.709: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:10.709: INFO: Feb 9 11:43:10.709: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:11.776: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:11.776: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:11.776: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:11.776: INFO: ss-2 hunter-server-hu5at5svl7ps Running 
30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:11.776: INFO: Feb 9 11:43:11.776: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:12.869: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:12.869: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:12.869: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:12.869: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:12.869: INFO: Feb 9 11:43:12.869: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:13.893: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:13.893: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:13.893: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:13.893: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:13.893: INFO: Feb 9 11:43:13.893: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 9 11:43:14.923: INFO: POD NODE PHASE GRACE CONDITIONS Feb 9 11:43:14.924: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:06 +0000 UTC }] Feb 9 11:43:14.924: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:14.924: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:42:27 +0000 UTC }] Feb 9 11:43:14.924: INFO: Feb 9 11:43:14.924: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-xfk4t Feb 9 11:43:15.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:43:16.163: INFO: rc: 1 Feb 9 11:43:16.163: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001417b60 exit status 1 true [0xc0009db328 0xc0009db3e8 0xc0009db408] [0xc0009db328 0xc0009db3e8 0xc0009db408] [0xc0009db3b0 0xc0009db400] [0x935700 0x935700] 0xc001e4ad80 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 9 11:43:26.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:43:26.333: INFO: rc: 1 Feb 9 11:43:26.333: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc001a0e570 exit status 1 true [0xc001b6a098 0xc001b6a0b0 0xc001b6a0c8] [0xc001b6a098 0xc001b6a0b0 0xc001b6a0c8] [0xc001b6a0a8 0xc001b6a0c0] [0x935700 0x935700] 0xc002293800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:43:36.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:43:36.514: INFO: rc: 1 Feb 9 11:43:36.514: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b34510 exit status 1 true [0xc00000ee50 0xc00000ee78 0xc00000ee90] [0xc00000ee50 0xc00000ee78 0xc00000ee90] [0xc00000ee70 0xc00000ee88] [0x935700 0x935700] 0xc001cd1da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:43:46.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:43:46.649: INFO: rc: 1 Feb 9 11:43:46.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018cc780 exit status 1 true [0xc0001704f8 0xc000170560 0xc0001705f0] [0xc0001704f8 0xc000170560 0xc0001705f0] [0xc000170520 0xc0001705c0] [0x935700 0x935700] 0xc0017c4ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:43:56.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:43:56.790: INFO: rc: 1 Feb 9 11:43:56.790: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b34660 exit status 1 true [0xc00000ee98 0xc00000eeb8 0xc00000ef10] [0xc00000ee98 0xc00000eeb8 0xc00000ef10] [0xc00000eeb0 0xc00000eee8] [0x935700 0x935700] 0xc001f68060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:44:06.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:44:07.012: INFO: rc: 1 Feb 9 11:44:07.013: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001417ce0 exit status 1 true [0xc0009db418 0xc0009db458 0xc0009db4b0] [0xc0009db418 0xc0009db458 0xc0009db4b0] [0xc0009db440 0xc0009db488] [0x935700 0x935700] 
0xc001e4b020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:44:17.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:44:17.252: INFO: rc: 1 Feb 9 11:44:17.253: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000adc060 exit status 1 true [0xc0009db4c8 0xc0009db558 0xc0009db5d8] [0xc0009db4c8 0xc0009db558 0xc0009db5d8] [0xc0009db4f0 0xc0009db5b8] [0x935700 0x935700] 0xc001e4b2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:44:27.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:44:27.483: INFO: rc: 1 Feb 9 11:44:27.484: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000adc240 exit status 1 true [0xc0009db5e8 0xc0009db660 0xc0009db688] [0xc0009db5e8 0xc0009db660 0xc0009db688] [0xc0009db648 0xc0009db678] [0x935700 0x935700] 0xc001e4b560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:44:37.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:44:37.674: INFO: rc: 1 Feb 9 11:44:37.674: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b34ab0 exit status 1 true [0xc00000ef18 0xc00000efc0 0xc00000f050] [0xc00000ef18 0xc00000efc0 0xc00000f050] [0xc00000ef70 0xc00000f020] [0x935700 0x935700] 0xc001f68300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:44:47.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:44:47.860: INFO: rc: 1 Feb 9 11:44:47.860: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a0e6c0 exit status 1 true [0xc001b6a0d0 0xc001b6a0e8 0xc001b6a100] [0xc001b6a0d0 0xc001b6a0e8 0xc001b6a100] [0xc001b6a0e0 0xc001b6a0f8] [0x935700 0x935700] 0xc002293aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:44:57.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:44:58.051: INFO: rc: 1 Feb 9 11:44:58.052: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013581b0 exit status 1 true [0xc00000e1f8 0xc00000ebf0 0xc00000ec40] [0xc00000e1f8 0xc00000ebf0 0xc00000ec40] [0xc00000ebe0 0xc00000ec20] [0x935700 0x935700] 0xc001cd1080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:45:08.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:45:08.171: INFO: rc: 1 Feb 9 11:45:08.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001358360 exit status 1 true [0xc00000ec68 0xc00000ece0 0xc00000ed50] [0xc00000ec68 0xc00000ece0 0xc00000ed50] [0xc00000ecd0 0xc00000ed38] [0x935700 0x935700] 0xc001cd1320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:45:18.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:45:18.295: INFO: rc: 1 Feb 9 11:45:18.295: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001358480 exit status 1 true [0xc00000ed58 0xc00000eda8 0xc00000edf8] [0xc00000ed58 0xc00000eda8 0xc00000edf8] [0xc00000ed78 0xc00000edf0] [0x935700 0x935700] 0xc001cd15c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:45:28.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:45:28.401: INFO: rc: 1 Feb 9 11:45:28.401: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013585a0 exit status 1 true [0xc00000ee08 0xc00000ee28 0xc00000ee68] [0xc00000ee08 0xc00000ee28 0xc00000ee68] [0xc00000ee20 0xc00000ee50] [0x935700 0x935700] 0xc001cd1860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:45:38.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:45:38.630: INFO: rc: 1 Feb 9 11:45:38.631: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013586c0 exit status 1 true [0xc00000ee70 0xc00000ee88 0xc00000eea0] [0xc00000ee70 0xc00000ee88 0xc00000eea0] [0xc00000ee80 0xc00000ee98] [0x935700 0x935700] 0xc001cd1b00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:45:48.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:45:48.795: INFO: rc: 1 Feb 9 11:45:48.796: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d741e0 exit status 1 true [0xc000170038 0xc000170210 0xc000170258] [0xc000170038 0xc000170210 0xc000170258] [0xc0001701d8 0xc000170248] [0x935700 0x935700] 0xc001f681e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:45:58.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:45:58.969: INFO: rc: 1 Feb 9 11:45:58.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013587e0 exit status 1 true [0xc00000eeb0 0xc00000eee8 0xc00000ef28] [0xc00000eeb0 0xc00000eee8 0xc00000ef28] [0xc00000eed0 0xc00000ef18] [0x935700 0x935700] 0xc001cd1da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:46:08.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:46:09.084: INFO: rc: 1 Feb 9 11:46:09.084: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001416180 exit status 1 true [0xc0009da060 0xc0009da300 0xc0009da648] [0xc0009da060 0xc0009da300 0xc0009da648] [0xc0009da2c8 0xc0009da538] [0x935700 0x935700] 0xc0017c4240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:46:19.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:46:19.228: INFO: rc: 1 Feb 9 11:46:19.228: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001358930 exit status 1 true [0xc00000ef70 0xc00000f020 0xc00000f138] [0xc00000ef70 0xc00000f020 0xc00000f138] [0xc00000eff0 0xc00000f0e0] [0x935700 0x935700] 0xc001e4a060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:46:29.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:46:29.402: INFO: rc: 1 Feb 9 11:46:29.402: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001358a50 exit status 1 true [0xc00000f170 0xc00000f218 0xc00000f2b0] [0xc00000f170 0xc00000f218 0xc00000f2b0] [0xc00000f1e0 0xc00000f2a0] [0x935700 0x935700] 0xc001e4a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:46:39.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:46:39.544: INFO: rc: 1 Feb 9 11:46:39.544: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001359230 exit status 1 true [0xc00000f2b8 0xc00000f348 0xc00000f3c8] [0xc00000f2b8 0xc00000f348 0xc00000f3c8] [0xc00000f308 0xc00000f380] [0x935700 0x935700] 0xc001e4a5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:46:49.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:46:49.699: INFO: rc: 1 Feb 9 11:46:49.699: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001416300 exit status 1 true [0xc0009da740 0xc0009da8d0 0xc0009da9c0] [0xc0009da740 0xc0009da8d0 0xc0009da9c0] [0xc0009da7a0 0xc0009da980] [0x935700 0x935700] 0xc0017c44e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:46:59.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:46:59.870: INFO: rc: 1 Feb 9 11:46:59.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001359440 exit status 1 true [0xc00000f3e8 0xc00000f460 0xc00000f4c0] [0xc00000f3e8 0xc00000f460 0xc00000f4c0] [0xc00000f450 0xc00000f480] 
[0x935700 0x935700] 0xc001e4a840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:47:09.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:47:10.046: INFO: rc: 1 Feb 9 11:47:10.047: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013581e0 exit status 1 true [0xc00016e000 0xc0001701d8 0xc000170248] [0xc00016e000 0xc0001701d8 0xc000170248] [0xc0001700d8 0xc000170238] [0x935700 0x935700] 0xc001cd1080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:47:20.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:47:20.177: INFO: rc: 1 Feb 9 11:47:20.177: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013583c0 exit status 1 true [0xc000170258 0xc0001702a8 0xc000170318] [0xc000170258 0xc0001702a8 0xc000170318] [0xc000170270 0xc000170308] [0x935700 0x935700] 0xc001cd1320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:47:30.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:47:30.348: INFO: rc: 1 Feb 9 11:47:30.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001416120 exit status 1 true [0xc0009da060 0xc0009da300 0xc0009da648] [0xc0009da060 0xc0009da300 0xc0009da648] [0xc0009da2c8 0xc0009da538] [0x935700 0x935700] 0xc001e4a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:47:40.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:47:40.588: INFO: rc: 1 Feb 9 11:47:40.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0014162a0 exit status 1 true [0xc0009da740 0xc0009da8d0 0xc0009da9c0] [0xc0009da740 0xc0009da8d0 0xc0009da9c0] [0xc0009da7a0 0xc0009da980] [0x935700 0x935700] 0xc001e4a480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:47:50.590: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:47:50.725: INFO: rc: 1 Feb 9 11:47:50.726: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b34180 exit status 1 true [0xc00000e1f8 0xc00000ebf0 0xc00000ec40] [0xc00000e1f8 0xc00000ebf0 0xc00000ec40] [0xc00000ebe0 0xc00000ec20] [0x935700 0x935700] 0xc0017c4240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:48:00.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:48:00.896: INFO: rc: 1 Feb 9 11:48:00.896: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b342d0 exit status 1 true [0xc00000ec68 0xc00000ece0 0xc00000ed50] [0xc00000ec68 0xc00000ece0 0xc00000ed50] [0xc00000ecd0 0xc00000ed38] [0x935700 0x935700] 0xc0017c44e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:48:10.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:48:11.034: INFO: rc: 1 Feb 9 11:48:11.035: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b34450 exit status 1 true [0xc00000ed58 0xc00000eda8 0xc00000edf8] [0xc00000ed58 0xc00000eda8 0xc00000edf8] [0xc00000ed78 0xc00000edf0] [0x935700 0x935700] 0xc0017c4780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 9 11:48:21.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xfk4t ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 9 11:48:21.187: INFO: rc: 1 Feb 9 11:48:21.187: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 9 11:48:21.187: INFO: Scaling statefulset ss to 0 Feb 9 11:48:21.210: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 9 11:48:21.214: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xfk4t Feb 9 11:48:21.217: INFO: Scaling statefulset ss to 0 Feb 9 11:48:21.239: INFO: Waiting for statefulset status.replicas updated to 0 Feb 9 11:48:21.243: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:48:21.286: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xfk4t" for this suite. Feb 9 11:48:29.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:48:29.372: INFO: namespace: e2e-tests-statefulset-xfk4t, resource: bindings, ignored listing per whitelist Feb 9 11:48:29.526: INFO: namespace e2e-tests-statefulset-xfk4t deletion completed in 8.231782339s • [SLOW TEST:383.643 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:48:29.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gswgx STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 9 11:48:29.709: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 9 11:49:02.019: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-gswgx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 9 11:49:02.019: INFO: >>> kubeConfig: /root/.kube/config I0209 11:49:02.108329 8 log.go:172] (0xc0000eb550) (0xc0006485a0) Create stream I0209 11:49:02.108538 8 log.go:172] (0xc0000eb550) (0xc0006485a0) Stream added, broadcasting: 1 I0209 11:49:02.114826 8 log.go:172] (0xc0000eb550) Reply frame received for 1 I0209 11:49:02.114864 8 log.go:172] (0xc0000eb550) (0xc0014cc000) Create stream I0209 11:49:02.114876 8 log.go:172] (0xc0000eb550) (0xc0014cc000) Stream added, broadcasting: 3 I0209 11:49:02.115911 8 log.go:172] (0xc0000eb550) Reply frame received for 3 I0209 11:49:02.115935 8 log.go:172] (0xc0000eb550) (0xc000648820) Create stream I0209 11:49:02.115943 8 log.go:172] (0xc0000eb550) (0xc000648820) Stream added, broadcasting: 5 I0209 11:49:02.116962 8 log.go:172] (0xc0000eb550) Reply frame received for 5 I0209 11:49:02.296715 8 log.go:172] (0xc0000eb550) Data frame received for 3 I0209 11:49:02.296880 8 log.go:172] (0xc0014cc000) (3) Data frame handling I0209 11:49:02.296948 8 log.go:172] (0xc0014cc000) (3) Data frame sent I0209 11:49:02.443158 8 log.go:172] (0xc0000eb550) (0xc0014cc000) Stream removed, broadcasting: 3 I0209 11:49:02.443326 8 log.go:172] (0xc0000eb550) Data 
frame received for 1 I0209 11:49:02.443369 8 log.go:172] (0xc0000eb550) (0xc000648820) Stream removed, broadcasting: 5 I0209 11:49:02.443399 8 log.go:172] (0xc0006485a0) (1) Data frame handling I0209 11:49:02.443419 8 log.go:172] (0xc0006485a0) (1) Data frame sent I0209 11:49:02.443427 8 log.go:172] (0xc0000eb550) (0xc0006485a0) Stream removed, broadcasting: 1 I0209 11:49:02.443437 8 log.go:172] (0xc0000eb550) Go away received I0209 11:49:02.443822 8 log.go:172] (0xc0000eb550) (0xc0006485a0) Stream removed, broadcasting: 1 I0209 11:49:02.443836 8 log.go:172] (0xc0000eb550) (0xc0014cc000) Stream removed, broadcasting: 3 I0209 11:49:02.443848 8 log.go:172] (0xc0000eb550) (0xc000648820) Stream removed, broadcasting: 5 Feb 9 11:49:02.443: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:49:02.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-gswgx" for this suite. Feb 9 11:49:28.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:49:28.752: INFO: namespace: e2e-tests-pod-network-test-gswgx, resource: bindings, ignored listing per whitelist Feb 9 11:49:28.791: INFO: namespace e2e-tests-pod-network-test-gswgx deletion completed in 26.326505633s • [SLOW TEST:59.265 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:49:28.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 9 11:49:41.586: INFO: Successfully updated pod "pod-update-3a72c2f8-4b32-11ea-aa78-0242ac110005" STEP: verifying the updated pod is in kubernetes Feb 9 11:49:41.605: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:49:41.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ld6jl" for this suite. 
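The pod-update test above exercises a flow that can be sketched by hand with kubectl: create a pod, change one of its mutable fields (the conformance test changes a label), and read it back. The commands below are a rough, illustrative reproduction only, assuming a reachable cluster; the pod name pod-update-demo and the image choice are hypothetical, and the suite itself drives this through the Go client rather than the CLI.

    # create a throwaway pod (hypothetical name and image)
    kubectl run pod-update-demo --image=nginx --restart=Never
    # wait for it to come up before mutating it
    kubectl wait --for=condition=Ready pod/pod-update-demo --timeout=120s
    # update the pod in place; labels are among the few fields that may change on a running pod
    kubectl label pod pod-update-demo time=updated --overwrite
    # confirm the update is visible, then clean up
    kubectl get pod pod-update-demo --show-labels
    kubectl delete pod pod-update-demo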
Feb 9 11:50:05.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:50:05.798: INFO: namespace: e2e-tests-pods-ld6jl, resource: bindings, ignored listing per whitelist Feb 9 11:50:05.915: INFO: namespace e2e-tests-pods-ld6jl deletion completed in 24.299978578s • [SLOW TEST:37.124 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:50:05.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:50:06.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-828d5" for this suite. 
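The Kubelet test above only asserts that a pod whose container command always fails can still be deleted. Roughly the same scenario can be staged by hand as below; the pod name and the use of /bin/false are illustrative assumptions, not what the framework actually creates.

    # start a pod whose command exits non-zero (hypothetical name, illustrative command)
    kubectl run always-fails-demo --image=busybox --restart=Never --command -- /bin/false
    # the container exits immediately, so the pod ends up Failed and never becomes Ready
    kubectl get pod always-fails-demo
    # deleting it must still succeed, which is what the conformance test checks
    kubectl delete pod always-fails-demo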
Feb 9 11:50:12.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:50:12.440: INFO: namespace: e2e-tests-kubelet-test-828d5, resource: bindings, ignored listing per whitelist Feb 9 11:50:12.471: INFO: namespace e2e-tests-kubelet-test-828d5 deletion completed in 6.241478742s • [SLOW TEST:6.555 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:50:12.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-548e7386-4b32-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 11:50:12.834: INFO: Waiting up to 5m0s for pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-xnd7g" to be "success or failure" Feb 9 11:50:12.864: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.59074ms Feb 9 11:50:15.476: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.641384009s Feb 9 11:50:17.508: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673609509s Feb 9 11:50:19.840: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.006015252s Feb 9 11:50:21.863: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.028917188s Feb 9 11:50:23.886: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.05223023s STEP: Saw pod success Feb 9 11:50:23.887: INFO: Pod "pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:50:23.900: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 11:50:24.175: INFO: Waiting for pod pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005 to disappear Feb 9 11:50:24.203: INFO: Pod pod-secrets-5495dd48-4b32-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:50:24.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xnd7g" for this suite. Feb 9 11:50:30.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:50:30.576: INFO: namespace: e2e-tests-secrets-xnd7g, resource: bindings, ignored listing per whitelist Feb 9 11:50:30.690: INFO: namespace e2e-tests-secrets-xnd7g deletion completed in 6.474608133s • [SLOW TEST:18.218 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:50:30.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 9 11:50:30.842: INFO: Waiting up to 5m0s for pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005" in namespace "e2e-tests-var-expansion-5ndsh" to be "success or failure" Feb 9 11:50:30.857: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.100408ms Feb 9 11:50:32.882: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040053513s Feb 9 11:50:34.897: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055178189s Feb 9 11:50:37.319: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.477736255s Feb 9 11:50:39.345: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503221258s Feb 9 11:50:41.362: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.5206112s Feb 9 11:50:43.897: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.055721258s STEP: Saw pod success Feb 9 11:50:43.898: INFO: Pod "var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:50:43.912: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005 container dapi-container: STEP: delete the pod Feb 9 11:50:44.292: INFO: Waiting for pod var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005 to disappear Feb 9 11:50:44.304: INFO: Pod var-expansion-5f5229da-4b32-11ea-aa78-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:50:44.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-5ndsh" for this suite. Feb 9 11:50:50.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:50:50.663: INFO: namespace: e2e-tests-var-expansion-5ndsh, resource: bindings, ignored listing per whitelist Feb 9 11:50:50.663: INFO: namespace e2e-tests-var-expansion-5ndsh deletion completed in 6.339857671s • [SLOW TEST:19.973 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:50:50.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6b4af0f0-4b32-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:50:50.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-2v7vt" to be "success or failure" Feb 9 11:50:50.910: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.34314ms Feb 9 11:50:53.173: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278221914s Feb 9 11:50:55.298: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402803815s Feb 9 11:50:57.311: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.416596825s Feb 9 11:50:59.327: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432647153s Feb 9 11:51:01.345: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.450189134s STEP: Saw pod success Feb 9 11:51:01.345: INFO: Pod "pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:51:01.353: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 9 11:51:01.510: INFO: Waiting for pod pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005 to disappear Feb 9 11:51:01.521: INFO: Pod pod-configmaps-6b4bdbd5-4b32-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:51:01.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2v7vt" for this suite. Feb 9 11:51:07.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:51:07.893: INFO: namespace: e2e-tests-configmap-2v7vt, resource: bindings, ignored listing per whitelist Feb 9 11:51:07.938: INFO: namespace e2e-tests-configmap-2v7vt deletion completed in 6.408898431s • [SLOW TEST:17.274 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:51:07.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-qtph STEP: Creating a pod to test atomic-volume-subpath Feb 9 11:51:08.154: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qtph" in namespace "e2e-tests-subpath-j6trf" to be "success or failure" Feb 9 11:51:08.178: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. Elapsed: 23.846584ms Feb 9 11:51:10.191: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037170444s Feb 9 11:51:12.208: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053976391s Feb 9 11:51:14.802: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.647872203s Feb 9 11:51:16.818: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. Elapsed: 8.663894514s Feb 9 11:51:18.832: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. Elapsed: 10.677881558s Feb 9 11:51:20.873: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Pending", Reason="", readiness=false. Elapsed: 12.719361761s Feb 9 11:51:22.885: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 14.731605051s Feb 9 11:51:24.905: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 16.751748371s Feb 9 11:51:26.926: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 18.772350044s Feb 9 11:51:28.945: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 20.791541351s Feb 9 11:51:30.960: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 22.805841854s Feb 9 11:51:32.981: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 24.827797057s Feb 9 11:51:35.013: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 26.859642203s Feb 9 11:51:37.031: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 28.876945318s Feb 9 11:51:39.045: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 30.891254609s Feb 9 11:51:41.057: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Running", Reason="", readiness=false. Elapsed: 32.903001265s Feb 9 11:51:43.709: INFO: Pod "pod-subpath-test-projected-qtph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.555701253s STEP: Saw pod success Feb 9 11:51:43.710: INFO: Pod "pod-subpath-test-projected-qtph" satisfied condition "success or failure" Feb 9 11:51:43.717: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-qtph container test-container-subpath-projected-qtph: STEP: delete the pod Feb 9 11:51:44.137: INFO: Waiting for pod pod-subpath-test-projected-qtph to disappear Feb 9 11:51:44.169: INFO: Pod pod-subpath-test-projected-qtph no longer exists STEP: Deleting pod pod-subpath-test-projected-qtph Feb 9 11:51:44.169: INFO: Deleting pod "pod-subpath-test-projected-qtph" in namespace "e2e-tests-subpath-j6trf" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:51:44.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-j6trf" for this suite. 
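The subpath test above runs a pod that mounts a single entry of a projected volume through `subPath` and then reads it back from the container log. A minimal sketch of that pod shape using the core/v1 Go types; the configMap name, image, and paths are illustrative placeholders, not the e2e framework's actual fixture:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildSubpathPod sketches a pod that mounts one entry of a projected
// volume via subPath; "my-config" and the mount paths are placeholders.
func buildSubpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"cat", "/data/key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/data/key",
					SubPath:   "key", // mount only this entry of the volume
				}},
			}},
		},
	}
}

func main() { _ = buildSubpathPod() }
```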
Feb 9 11:51:50.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:51:50.426: INFO: namespace: e2e-tests-subpath-j6trf, resource: bindings, ignored listing per whitelist Feb 9 11:51:50.652: INFO: namespace e2e-tests-subpath-j6trf deletion completed in 6.462016089s • [SLOW TEST:42.713 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:51:50.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:51:50.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-jr6z9" to be "success or failure" Feb 9 11:51:50.926: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.505824ms Feb 9 11:51:52.949: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03454673s Feb 9 11:51:54.966: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051539991s Feb 9 11:51:57.733: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818311572s Feb 9 11:51:59.752: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.837072706s Feb 9 11:52:01.765: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.850230508s STEP: Saw pod success Feb 9 11:52:01.765: INFO: Pod "downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:52:01.775: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:52:03.637: INFO: Waiting for pod downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005 to disappear Feb 9 11:52:03.652: INFO: Pod downwardapi-volume-8f0fa014-4b32-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:52:03.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jr6z9" for this suite. Feb 9 11:52:09.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:52:09.814: INFO: namespace: e2e-tests-projected-jr6z9, resource: bindings, ignored listing per whitelist Feb 9 11:52:09.886: INFO: namespace e2e-tests-projected-jr6z9 deletion completed in 6.225016151s • [SLOW TEST:19.233 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:52:09.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 9 11:52:22.933: INFO: Successfully updated pod "labelsupdate9a7de2a7-4b32-11ea-aa78-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:52:25.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gjnhs" for this suite. 
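The "update labels on modification" test above relies on the kubelet rewriting a downward-API file when the pod's labels change. A minimal sketch of a pod that projects its own `metadata.labels` into a file; the pod name, image, and label value are illustrative assumptions, not the test's actual fixture:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsDownwardPod projects the pod's own labels into /etc/podinfo/labels;
// editing the labels on the live object causes the file to be rewritten.
func labelsDownwardPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"testpod": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = labelsDownwardPod() }
```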
Feb 9 11:52:49.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:52:49.194: INFO: namespace: e2e-tests-projected-gjnhs, resource: bindings, ignored listing per whitelist Feb 9 11:52:49.267: INFO: namespace e2e-tests-projected-gjnhs deletion completed in 24.240914417s • [SLOW TEST:39.380 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:52:49.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 9 11:52:49.534: INFO: Waiting up to 5m0s for pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-lpc86" to be "success or failure" Feb 9 11:52:49.561: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.598648ms Feb 9 11:52:51.618: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083691823s Feb 9 11:52:53.639: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105417298s Feb 9 11:52:55.661: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126705378s Feb 9 11:52:57.907: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373356278s Feb 9 11:52:59.938: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.404183878s Feb 9 11:53:01.959: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.424874719s STEP: Saw pod success Feb 9 11:53:01.959: INFO: Pod "pod-b1ff9065-4b32-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:53:01.969: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b1ff9065-4b32-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 11:53:02.250: INFO: Waiting for pod pod-b1ff9065-4b32-11ea-aa78-0242ac110005 to disappear Feb 9 11:53:02.310: INFO: Pod pod-b1ff9065-4b32-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:53:02.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lpc86" for this suite. 
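The (root,0777,tmpfs) emptyDir test above mounts a memory-backed emptyDir and checks file permissions inside it. A minimal sketch of that pod shape, assuming a busybox image and placeholder paths:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts a Memory-medium emptyDir (tmpfs) and has the
// container create a file with mode 0777, then print the mode back.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/file && chmod 0777 /mnt/file && stat -c %a /mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/mnt"}},
			}},
		},
	}
}

func main() { _ = tmpfsEmptyDirPod() }
```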
Feb 9 11:53:08.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:53:08.898: INFO: namespace: e2e-tests-emptydir-lpc86, resource: bindings, ignored listing per whitelist Feb 9 11:53:08.924: INFO: namespace e2e-tests-emptydir-lpc86 deletion completed in 6.47841373s • [SLOW TEST:19.657 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:53:08.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:53:09.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-d4tzr" for this suite. 
Feb 9 11:53:15.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:53:15.274: INFO: namespace: e2e-tests-services-d4tzr, resource: bindings, ignored listing per whitelist Feb 9 11:53:15.426: INFO: namespace e2e-tests-services-d4tzr deletion completed in 6.288844059s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.501 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:53:15.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0209 11:53:29.837413 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 9 11:53:29.837: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:53:29.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-96h2v" for this suite. 
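The garbage-collector test above gives half of the pods two owners, so deleting one ReplicationController (even while it waits for its dependents) must not remove pods that still have a live owner. A minimal sketch of the owner-reference shape involved; the UIDs are placeholders, not values from this run:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownerRefs sketches a pod owned by both ReplicationControllers used in the
// test: one owner is being deleted, the other remains valid, so the GC
// must keep the pod.
func ownerRefs() []metav1.OwnerReference {
	isController := true
	block := true
	return []metav1.OwnerReference{
		{
			APIVersion:         "v1",
			Kind:               "ReplicationController",
			Name:               "simpletest-rc-to-be-deleted",
			UID:                "00000000-0000-0000-0000-000000000001", // placeholder UID
			Controller:         &isController,
			BlockOwnerDeletion: &block,
		},
		{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-stay",
			UID:        "00000000-0000-0000-0000-000000000002", // placeholder UID
		},
	}
}

func main() { fmt.Println(len(ownerRefs()), "owner references") }
```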
Feb 9 11:53:55.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:53:55.293: INFO: namespace: e2e-tests-gc-96h2v, resource: bindings, ignored listing per whitelist Feb 9 11:53:55.337: INFO: namespace e2e-tests-gc-96h2v deletion completed in 24.778437431s • [SLOW TEST:39.911 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:53:55.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 9 11:53:58.359: INFO: mount service account has no secret references STEP: getting the auto-created API token Feb 9 11:54:00.384: INFO: created pod pod-service-account-defaultsa Feb 9 11:54:00.384: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 9 11:54:00.700: INFO: created pod pod-service-account-mountsa Feb 9 11:54:00.701: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 9 11:54:00.730: INFO: created pod pod-service-account-nomountsa Feb 9 11:54:00.730: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 9 11:54:00.881: INFO: created pod pod-service-account-defaultsa-mountspec Feb 9 11:54:00.881: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 9 11:54:00.904: INFO: created pod pod-service-account-mountsa-mountspec Feb 9 11:54:00.904: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 9 11:54:00.929: INFO: created pod pod-service-account-nomountsa-mountspec Feb 9 11:54:00.929: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 9 11:54:01.012: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 9 11:54:01.012: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 9 11:54:01.059: INFO: created pod pod-service-account-mountsa-nomountspec Feb 9 11:54:01.059: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 9 11:54:01.115: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 9 11:54:01.116: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:54:01.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-svcaccounts-94cbg" for this suite. Feb 9 11:54:30.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:54:30.689: INFO: namespace: e2e-tests-svcaccounts-94cbg, resource: bindings, ignored listing per whitelist Feb 9 11:54:30.719: INFO: namespace e2e-tests-svcaccounts-94cbg deletion completed in 29.50256955s • [SLOW TEST:35.381 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:54:30.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 9 11:54:53.344: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 9 11:54:53.387: INFO: Pod pod-with-poststart-http-hook still exists Feb 9 11:54:55.387: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 9 11:54:55.401: INFO: Pod pod-with-poststart-http-hook still exists Feb 9 11:54:57.388: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 9 11:54:57.442: INFO: Pod pod-with-poststart-http-hook still exists Feb 9 11:54:59.387: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 9 11:54:59.447: INFO: Pod pod-with-poststart-http-hook still exists Feb 9 11:55:01.387: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 9 11:55:01.504: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:55:01.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ghdzh" for this suite. 
Feb 9 11:55:25.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:55:25.652: INFO: namespace: e2e-tests-container-lifecycle-hook-ghdzh, resource: bindings, ignored listing per whitelist Feb 9 11:55:25.769: INFO: namespace e2e-tests-container-lifecycle-hook-ghdzh deletion completed in 24.24874843s • [SLOW TEST:55.050 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:55:25.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:55:25.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-cz4dz" to be "success or failure" Feb 9 11:55:25.927: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.334075ms Feb 9 11:55:28.099: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184566545s Feb 9 11:55:30.117: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202930139s Feb 9 11:55:32.138: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223505602s Feb 9 11:55:34.158: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243973613s Feb 9 11:55:36.204: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.289502206s STEP: Saw pod success Feb 9 11:55:36.204: INFO: Pod "downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:55:36.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:55:36.473: INFO: Waiting for pod downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005 to disappear Feb 9 11:55:36.485: INFO: Pod downwardapi-volume-0f395a2d-4b33-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:55:36.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cz4dz" for this suite. Feb 9 11:55:42.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:55:42.591: INFO: namespace: e2e-tests-projected-cz4dz, resource: bindings, ignored listing per whitelist Feb 9 11:55:42.762: INFO: namespace e2e-tests-projected-cz4dz deletion completed in 6.265383436s • [SLOW TEST:16.993 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:55:42.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 9 11:55:43.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-27zds' Feb 9 11:55:45.064: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 9 11:55:45.065: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Feb 9 11:55:45.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-27zds' Feb 9 11:55:45.546: INFO: stderr: "" Feb 9 11:55:45.546: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:55:45.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-27zds" for this suite. Feb 9 11:55:54.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:55:54.965: INFO: namespace: e2e-tests-kubectl-27zds, resource: bindings, ignored listing per whitelist Feb 9 11:55:54.984: INFO: namespace e2e-tests-kubectl-27zds deletion completed in 9.422072589s • [SLOW TEST:12.221 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:55:54.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-20ce9fa4-4b33-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 11:55:55.459: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-hjg95" to be "success or failure" Feb 9 11:55:55.604: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.733974ms Feb 9 11:55:57.816: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357180896s Feb 9 11:55:59.832: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372664724s Feb 9 11:56:02.095: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.635928917s Feb 9 11:56:04.303: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.844284615s Feb 9 11:56:06.518: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.058474435s STEP: Saw pod success Feb 9 11:56:06.518: INFO: Pod "pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:56:06.541: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 9 11:56:06.728: INFO: Waiting for pod pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005 to disappear Feb 9 11:56:06.795: INFO: Pod pod-projected-configmaps-20d09e88-4b33-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:56:06.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hjg95" for this suite. Feb 9 11:56:12.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:56:12.962: INFO: namespace: e2e-tests-projected-hjg95, resource: bindings, ignored listing per whitelist Feb 9 11:56:13.008: INFO: namespace e2e-tests-projected-hjg95 deletion completed in 6.200344962s • [SLOW TEST:18.023 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:56:13.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:57:13.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rclks" for this suite. 
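The probe test above asserts that a pod whose readiness probe always fails is never reported Ready and, since readiness failures do not trigger restarts, keeps a restart count of zero. A minimal sketch of such a pod, with an illustrative image and command; in the v1.13-era API the probe's handler is the embedded `Handler` field (newer releases call it `ProbeHandler`):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingReadinessPod runs a long-lived container whose readiness probe
// always exits non-zero, so Ready stays false and no restart ever occurs.
func failingReadinessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() { _ = failingReadinessPod() }
```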
Feb 9 11:57:37.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:57:37.429: INFO: namespace: e2e-tests-container-probe-rclks, resource: bindings, ignored listing per whitelist Feb 9 11:57:37.491: INFO: namespace e2e-tests-container-probe-rclks deletion completed in 24.237017248s • [SLOW TEST:84.483 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:57:37.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0209 11:57:48.267568 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 9 11:57:48.267: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:57:48.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-prvl5" for this suite. 
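The test above deletes the ReplicationController without orphaning, so the garbage collector also removes the pods it owns. A minimal sketch of that non-orphaning delete with Background propagation; the call matches the v1.13-era client-go in use for this run (newer client-go versions additionally take a context and typed options), and the namespace/name parameters are placeholders:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCAndDependents removes an RC with Background propagation, so the
// garbage collector deletes its pods instead of orphaning them.
func deleteRCAndDependents(cs kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}

func main() {}
```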
Feb 9 11:57:55.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:57:55.376: INFO: namespace: e2e-tests-gc-prvl5, resource: bindings, ignored listing per whitelist Feb 9 11:57:55.514: INFO: namespace e2e-tests-gc-prvl5 deletion completed in 7.24010024s • [SLOW TEST:18.022 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:57:55.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-68894bbf-4b33-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 11:57:55.879: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-wqcf8" to be "success or failure" Feb 9 11:57:55.899: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.542007ms Feb 9 11:57:57.924: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044361924s Feb 9 11:57:59.949: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069146873s Feb 9 11:58:02.163: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283728246s Feb 9 11:58:04.180: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300471856s Feb 9 11:58:06.194: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.314305461s STEP: Saw pod success Feb 9 11:58:06.194: INFO: Pod "pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:58:06.198: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 9 11:58:06.260: INFO: Waiting for pod pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005 to disappear Feb 9 11:58:06.274: INFO: Pod pod-projected-secrets-688dbc68-4b33-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:58:06.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wqcf8" for this suite. Feb 9 11:58:14.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:58:14.633: INFO: namespace: e2e-tests-projected-wqcf8, resource: bindings, ignored listing per whitelist Feb 9 11:58:14.698: INFO: namespace e2e-tests-projected-wqcf8 deletion completed in 8.404735747s • [SLOW TEST:19.185 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:58:14.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-27zss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-27zss to expose endpoints map[] Feb 9 11:58:15.162: INFO: Get endpoints failed (9.897985ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 9 11:58:16.174: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-27zss exposes endpoints map[] (1.022773489s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-27zss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-27zss to expose endpoints map[pod1:[80]] Feb 9 11:58:20.658: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.449984247s elapsed, will retry) Feb 9 11:58:26.708: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-27zss exposes endpoints map[pod1:[80]] (10.500646957s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-27zss STEP: waiting up to 3m0s for service endpoint-test2 in namespace 
e2e-tests-services-27zss to expose endpoints map[pod1:[80] pod2:[80]] Feb 9 11:58:31.011: INFO: Unexpected endpoints: found map[74b7a494-4b33-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.293732059s elapsed, will retry) Feb 9 11:58:36.371: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-27zss exposes endpoints map[pod1:[80] pod2:[80]] (9.653301613s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-27zss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-27zss to expose endpoints map[pod2:[80]] Feb 9 11:58:37.578: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-27zss exposes endpoints map[pod2:[80]] (1.200075833s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-27zss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-27zss to expose endpoints map[] Feb 9 11:58:38.768: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-27zss exposes endpoints map[] (1.170391036s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:58:38.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-27zss" for this suite. Feb 9 11:59:03.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:59:03.250: INFO: namespace: e2e-tests-services-27zss, resource: bindings, ignored listing per whitelist Feb 9 11:59:03.270: INFO: namespace e2e-tests-services-27zss deletion completed in 24.299855493s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.571 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:59:03.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 11:59:03.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-9xlc9" to be "success or failure" Feb 9 11:59:03.509: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.725102ms Feb 9 11:59:05.526: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025253536s Feb 9 11:59:07.538: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037650662s Feb 9 11:59:09.714: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212976451s Feb 9 11:59:11.747: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24675209s Feb 9 11:59:13.762: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261637242s Feb 9 11:59:15.782: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.281435211s STEP: Saw pod success Feb 9 11:59:15.782: INFO: Pod "downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 11:59:15.787: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 11:59:16.956: INFO: Waiting for pod downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005 to disappear Feb 9 11:59:17.164: INFO: Pod downwardapi-volume-90dfa24f-4b33-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 11:59:17.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9xlc9" for this suite. 
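The downward-API volume test above projects the container's memory limit into a file via `resourceFieldRef` and reads it back from the container log. A minimal sketch of that pod shape; the limit size, image, and paths are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitDownwardPod exposes the container's limits.memory through a
// downward-API volume file that the container simply cats.
func memoryLimitDownwardPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = memoryLimitDownwardPod() }
```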
Feb 9 11:59:23.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 11:59:23.405: INFO: namespace: e2e-tests-downward-api-9xlc9, resource: bindings, ignored listing per whitelist Feb 9 11:59:23.434: INFO: namespace e2e-tests-downward-api-9xlc9 deletion completed in 6.255024714s • [SLOW TEST:20.164 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 11:59:23.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 11:59:23.731: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Feb 9 11:59:23.762: INFO: Number of nodes with available pods: 0 Feb 9 11:59:23.762: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:24.789: INFO: Number of nodes with available pods: 0 Feb 9 11:59:24.789: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:25.794: INFO: Number of nodes with available pods: 0 Feb 9 11:59:25.794: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:26.786: INFO: Number of nodes with available pods: 0 Feb 9 11:59:26.786: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:27.823: INFO: Number of nodes with available pods: 0 Feb 9 11:59:27.823: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:28.839: INFO: Number of nodes with available pods: 0 Feb 9 11:59:28.840: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:29.908: INFO: Number of nodes with available pods: 0 Feb 9 11:59:29.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:30.781: INFO: Number of nodes with available pods: 0 Feb 9 11:59:30.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:31.864: INFO: Number of nodes with available pods: 0 Feb 9 11:59:31.864: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:32.776: INFO: Number of nodes with available pods: 1 Feb 9 11:59:32.776: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 9 11:59:32.887: INFO: Wrong image for pod: daemon-set-vxcr5. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:33.931: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:34.932: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:35.934: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:37.071: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:37.920: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:39.008: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:39.962: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:40.928: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:40.928: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:41.924: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:41.924: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:42.929: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:42.929: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:43.939: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:43.939: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:44.932: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:44.932: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:45.942: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:45.942: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:46.972: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:46.972: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:48.252: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:48.252: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:48.922: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:48.922: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:49.948: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 9 11:59:49.948: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:50.944: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:50.944: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:51.928: INFO: Wrong image for pod: daemon-set-vxcr5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 9 11:59:51.928: INFO: Pod daemon-set-vxcr5 is not available Feb 9 11:59:52.923: INFO: Pod daemon-set-6csqd is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 9 11:59:53.050: INFO: Number of nodes with available pods: 0 Feb 9 11:59:53.050: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:54.175: INFO: Number of nodes with available pods: 0 Feb 9 11:59:54.175: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:55.088: INFO: Number of nodes with available pods: 0 Feb 9 11:59:55.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:56.107: INFO: Number of nodes with available pods: 0 Feb 9 11:59:56.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:57.082: INFO: Number of nodes with available pods: 0 Feb 9 11:59:57.082: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:58.325: INFO: Number of nodes with available pods: 0 Feb 9 11:59:58.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 11:59:59.392: INFO: Number of nodes with available pods: 0 Feb 9 11:59:59.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 12:00:00.071: INFO: Number of nodes with available pods: 0 Feb 9 12:00:00.071: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 12:00:01.104: INFO: Number of nodes with available pods: 0 Feb 9 12:00:01.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 12:00:02.068: INFO: Number of nodes with available pods: 0 Feb 9 12:00:02.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 9 12:00:03.068: INFO: Number of nodes with available pods: 1 Feb 9 12:00:03.068: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nxgqb, will wait for the garbage collector to delete the pods Feb 9 12:00:03.210: INFO: Deleting DaemonSet.extensions daemon-set took: 63.992725ms Feb 9 12:00:03.510: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.705795ms Feb 9 12:00:22.655: INFO: Number of nodes with available pods: 0 Feb 9 12:00:22.656: INFO: Number of running nodes: 0, number of available pods: 0 Feb 9 12:00:22.666: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nxgqb/daemonsets","resourceVersion":"21084033"},"items":null} Feb 9 12:00:22.674: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nxgqb/pods","resourceVersion":"21084033"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:00:22.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-nxgqb" for this suite. Feb 9 12:00:30.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:00:30.873: INFO: namespace: e2e-tests-daemonsets-nxgqb, resource: bindings, ignored listing per whitelist Feb 9 12:00:30.957: INFO: namespace e2e-tests-daemonsets-nxgqb deletion completed in 8.242686389s • [SLOW TEST:67.522 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:00:30.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 12:00:31.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-ps42b" to be "success or failure" Feb 9 12:00:31.183: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80892ms Feb 9 12:00:33.227: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053075314s Feb 9 12:00:35.350: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175922268s Feb 9 12:00:37.516: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341209095s Feb 9 12:00:39.533: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358778885s Feb 9 12:00:41.547: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.372644575s STEP: Saw pod success Feb 9 12:00:41.547: INFO: Pod "downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:00:41.553: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 12:00:41.688: INFO: Waiting for pod downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005 to disappear Feb 9 12:00:41.814: INFO: Pod downwardapi-volume-c52bd070-4b33-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:00:41.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ps42b" for this suite. Feb 9 12:00:49.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:00:50.031: INFO: namespace: e2e-tests-downward-api-ps42b, resource: bindings, ignored listing per whitelist Feb 9 12:00:50.050: INFO: namespace e2e-tests-downward-api-ps42b deletion completed in 8.217319682s • [SLOW TEST:19.093 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:00:50.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 9 12:00:50.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5tgdk' Feb 9 12:00:50.607: INFO: stderr: "" Feb 9 12:00:50.608: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 9 12:01:00.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5tgdk -o json' Feb 9 12:01:00.818: INFO: stderr: "" Feb 9 12:01:00.818: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-09T12:00:50Z\",\n \"labels\": {\n 
\"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-5tgdk\",\n \"resourceVersion\": \"21084131\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-5tgdk/pods/e2e-test-nginx-pod\",\n \"uid\": \"d0b8df20-4b33-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-j6sw8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-j6sw8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-j6sw8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-09T12:00:51Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-09T12:01:00Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-09T12:01:00Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-09T12:00:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://5184754bc41a07001d318f38f011f78f715232f573fb1cd3a06ccde986fe070c\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-09T12:00:59Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-09T12:00:51Z\"\n }\n}\n" STEP: replace the image in the pod Feb 9 12:01:00.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-5tgdk' Feb 9 12:01:01.316: INFO: stderr: "" Feb 9 12:01:01.316: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 9 12:01:01.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5tgdk' Feb 9 12:01:11.917: INFO: stderr: "" Feb 9 12:01:11.917: INFO: stdout: 
"pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:01:11.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5tgdk" for this suite. Feb 9 12:01:17.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:01:17.998: INFO: namespace: e2e-tests-kubectl-5tgdk, resource: bindings, ignored listing per whitelist Feb 9 12:01:18.102: INFO: namespace e2e-tests-kubectl-5tgdk deletion completed in 6.16856091s • [SLOW TEST:28.052 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:01:18.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-e14f6740-4b33-11ea-aa78-0242ac110005 STEP: Creating secret with name s-test-opt-upd-e14f6853-4b33-11ea-aa78-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e14f6740-4b33-11ea-aa78-0242ac110005 STEP: Updating secret s-test-opt-upd-e14f6853-4b33-11ea-aa78-0242ac110005 STEP: Creating secret with name s-test-opt-create-e14f6873-4b33-11ea-aa78-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:01:36.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xjcdg" for this suite. 
Feb 9 12:02:00.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:02:00.922: INFO: namespace: e2e-tests-projected-xjcdg, resource: bindings, ignored listing per whitelist Feb 9 12:02:01.004: INFO: namespace e2e-tests-projected-xjcdg deletion completed in 24.252224017s • [SLOW TEST:42.902 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:02:01.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 12:02:01.389: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 9 12:02:06.506: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 9 12:02:12.543: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 9 12:02:12.679: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-6hg9j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6hg9j/deployments/test-cleanup-deployment,UID:01a6f99c-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084299,Generation:1,CreationTimestamp:2020-02-09 12:02:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 9 12:02:12.683: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:02:12.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6hg9j" for this suite. Feb 9 12:02:20.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:02:20.865: INFO: namespace: e2e-tests-deployment-6hg9j, resource: bindings, ignored listing per whitelist Feb 9 12:02:21.139: INFO: namespace e2e-tests-deployment-6hg9j deletion completed in 8.358845137s • [SLOW TEST:20.130 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:02:21.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 12:02:48.730: INFO: Container started at 2020-02-09 12:02:30 +0000 UTC, pod became ready at 2020-02-09 12:02:47 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:02:48.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-g6f7n" for this suite. Feb 9 12:03:12.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:03:12.919: INFO: namespace: e2e-tests-container-probe-g6f7n, resource: bindings, ignored listing per whitelist Feb 9 12:03:12.981: INFO: namespace e2e-tests-container-probe-g6f7n deletion completed in 24.194081673s • [SLOW TEST:51.842 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:03:12.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 9 12:03:13.761: INFO: Waiting up to 5m0s for pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn" in namespace "e2e-tests-svcaccounts-4slkk" to be "success or failure" Feb 9 12:03:13.871: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 109.694252ms Feb 9 12:03:15.940: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178098455s Feb 9 12:03:17.963: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201158095s Feb 9 12:03:20.354: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592134275s Feb 9 12:03:22.384: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622895245s Feb 9 12:03:24.514: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.752606047s Feb 9 12:03:26.551: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.78942267s Feb 9 12:03:28.592: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.8303384s Feb 9 12:03:30.626: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.863989538s STEP: Saw pod success Feb 9 12:03:30.626: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn" satisfied condition "success or failure" Feb 9 12:03:30.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn container token-test: STEP: delete the pod Feb 9 12:03:31.552: INFO: Waiting for pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn to disappear Feb 9 12:03:31.994: INFO: Pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-s5trn no longer exists STEP: Creating a pod to test consume service account root CA Feb 9 12:03:32.023: INFO: Waiting up to 5m0s for pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts" in namespace "e2e-tests-svcaccounts-4slkk" to be "success or failure" Feb 9 12:03:32.095: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 72.063887ms Feb 9 12:03:34.131: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108728541s Feb 9 12:03:36.142: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119167445s Feb 9 12:03:38.286: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263126097s Feb 9 12:03:40.298: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 8.275184973s Feb 9 12:03:42.321: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 10.298024104s Feb 9 12:03:44.503: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 12.480194108s Feb 9 12:03:46.799: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Pending", Reason="", readiness=false. Elapsed: 14.776304156s Feb 9 12:03:48.808: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.78557403s STEP: Saw pod success Feb 9 12:03:48.808: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts" satisfied condition "success or failure" Feb 9 12:03:48.812: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts container root-ca-test: STEP: delete the pod Feb 9 12:03:49.644: INFO: Waiting for pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts to disappear Feb 9 12:03:50.050: INFO: Pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-dbnts no longer exists STEP: Creating a pod to test consume service account namespace Feb 9 12:03:50.140: INFO: Waiting up to 5m0s for pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42" in namespace "e2e-tests-svcaccounts-4slkk" to be "success or failure" Feb 9 12:03:50.452: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 311.346532ms Feb 9 12:03:52.486: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.345821234s Feb 9 12:03:54.515: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374987049s Feb 9 12:03:56.719: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578281819s Feb 9 12:03:58.734: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593563038s Feb 9 12:04:00.840: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.699465644s Feb 9 12:04:02.864: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 12.723193051s Feb 9 12:04:04.885: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Pending", Reason="", readiness=false. Elapsed: 14.745076625s Feb 9 12:04:07.031: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.891042363s STEP: Saw pod success Feb 9 12:04:07.032: INFO: Pod "pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42" satisfied condition "success or failure" Feb 9 12:04:07.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42 container namespace-test: STEP: delete the pod Feb 9 12:04:07.122: INFO: Waiting for pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42 to disappear Feb 9 12:04:07.179: INFO: Pod pod-service-account-26129a24-4b34-11ea-aa78-0242ac110005-rzs42 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:04:07.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-4slkk" for this suite. 
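The ServiceAccounts spec above runs three short-lived pods that each read one of the files the kubelet mounts from the auto-created service-account token: the token itself, the cluster root CA, and the namespace. A small Go program of the sort that could sit in such a test container, reading the well-known default mount path; the program itself is only a sketch, not the image the suite actually uses.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default location of the service-account token volume inside a pod.
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Fprintf(os.Stderr, "failed to read %s: %v\n", name, err)
			os.Exit(1) // a non-zero exit is the "failure" side of "success or failure"
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}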
Feb 9 12:04:15.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:04:15.345: INFO: namespace: e2e-tests-svcaccounts-4slkk, resource: bindings, ignored listing per whitelist Feb 9 12:04:15.377: INFO: namespace e2e-tests-svcaccounts-4slkk deletion completed in 8.187297577s • [SLOW TEST:62.395 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:04:15.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4aece001-4b34-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:04:15.602: INFO: Waiting up to 5m0s for pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-55bnr" to be "success or failure" Feb 9 12:04:15.619: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.883963ms Feb 9 12:04:17.895: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293017986s Feb 9 12:04:19.906: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304729697s Feb 9 12:04:22.031: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429389261s Feb 9 12:04:24.108: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50654075s Feb 9 12:04:26.123: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.521840126s STEP: Saw pod success Feb 9 12:04:26.124: INFO: Pod "pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:04:26.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 12:04:26.292: INFO: Waiting for pod pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005 to disappear Feb 9 12:04:26.301: INFO: Pod pod-secrets-4af0c365-4b34-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:04:26.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-55bnr" for this suite. 
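The secrets spec above mounts a secret volume with an explicit defaultMode while the pod runs as a non-root user with an fsGroup, then inspects the resulting file mode and contents from inside the container. A sketch of the relevant parts of such a pod; UID/GID 1000, mode 0400, the busybox image and the secret name are illustrative, not read from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(v int32) *int32 { return &v }
func int64Ptr(v int64) *int64 { return &v }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(1000), // group ownership applied to the mounted secret files
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test", // placeholder; the e2e suite generates a unique name
						DefaultMode: int32Ptr(0400),
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}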
Feb 9 12:04:33.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:04:33.465: INFO: namespace: e2e-tests-secrets-55bnr, resource: bindings, ignored listing per whitelist Feb 9 12:04:33.569: INFO: namespace e2e-tests-secrets-55bnr deletion completed in 7.251207483s • [SLOW TEST:18.192 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:04:33.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 9 12:04:44.695: INFO: Successfully updated pod "labelsupdate55d8698d-4b34-11ea-aa78-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:04:46.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-b2wpm" for this suite. 
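Unlike the earlier memory-limit case, this downward-api spec uses fieldRef: the pod's own labels are projected into a file, the test then patches the labels ("Successfully updated pod ..."), and waits for the kubelet to rewrite the file. A sketch of the volume item involved; the file name "labels" is a conventional choice here, not quoted from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API item exposing the pod's labels; the kubelet keeps the file in
	// sync, so editing the pod's labels later changes its contents in place.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}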
Feb 9 12:05:10.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:05:10.955: INFO: namespace: e2e-tests-downward-api-b2wpm, resource: bindings, ignored listing per whitelist Feb 9 12:05:10.979: INFO: namespace e2e-tests-downward-api-b2wpm deletion completed in 24.177029758s • [SLOW TEST:37.410 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:05:10.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 9 12:05:11.182: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 9 12:05:16.198: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:05:18.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-fhzhq" for this suite. 
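The "release" in the ReplicationController spec above comes from changing the label on one of the controller's pods so that it no longer matches the selector; the controller then stops claiming the pod and creates a replacement, which is what "Then the pod is released" records. A sketch of an RC whose pods are matched by a single label, assuming a name: pod-release key consistent with the pod name in the log; the label key and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(v int32) *int32 { return &v }

func main() {
	labels := map[string]string{"name": "pod-release"} // assumed label key/value
	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{Kind: "ReplicationController", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			// Relabeling a running pod so it stops matching this selector releases
			// it from the controller, which then creates a new matching pod.
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-release",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}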
Feb 9 12:05:29.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:05:29.547: INFO: namespace: e2e-tests-replication-controller-fhzhq, resource: bindings, ignored listing per whitelist Feb 9 12:05:29.717: INFO: namespace e2e-tests-replication-controller-fhzhq deletion completed in 10.587725244s • [SLOW TEST:18.738 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:05:29.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 12:05:30.542: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 9 12:05:30.740: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 9 12:05:35.781: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 9 12:05:44.089: INFO: Creating deployment "test-rolling-update-deployment" Feb 9 12:05:44.127: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 9 12:05:44.221: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 9 12:05:46.248: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 9 12:05:46.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 12:05:48.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 12:05:50.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 12:05:52.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716846744, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 9 12:05:54.812: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 9 12:05:55.057: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-f58nw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f58nw/deployments/test-rolling-update-deployment,UID:7fb1b40b-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084830,Generation:1,CreationTimestamp:2020-02-09 12:05:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-09 12:05:44 +0000 UTC 2020-02-09 12:05:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-09 12:05:53 +0000 UTC 2020-02-09 12:05:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 9 12:05:55.071: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-f58nw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f58nw/replicasets/test-rolling-update-deployment-75db98fb4c,UID:7fcf4d6f-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084821,Generation:1,CreationTimestamp:2020-02-09 12:05:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7fb1b40b-4b34-11ea-a994-fa163e34d433 0xc00216ec37 
0xc00216ec38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 9 12:05:55.071: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 9 12:05:55.072: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-f58nw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f58nw/replicasets/test-rolling-update-controller,UID:779f4785-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084829,Generation:2,CreationTimestamp:2020-02-09 12:05:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7fb1b40b-4b34-11ea-a994-fa163e34d433 0xc00216eb77 0xc00216eb78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 9 12:05:55.103: INFO: Pod "test-rolling-update-deployment-75db98fb4c-xfsvr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-xfsvr,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-f58nw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-f58nw/pods/test-rolling-update-deployment-75db98fb4c-xfsvr,UID:7fd18b0a-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084820,Generation:0,CreationTimestamp:2020-02-09 12:05:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 7fcf4d6f-4b34-11ea-a994-fa163e34d433 0xc001386367 0xc001386368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-678hl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-678hl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-678hl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001386440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0013864e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:05:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 
+0000 UTC 2020-02-09 12:05:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:05:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:05:44 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-09 12:05:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-09 12:05:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0827ec8cdb11cc14e9b5e145ce371eabd0d594ffc6fc6b11efdd190c939ce89e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:05:55.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-f58nw" for this suite. Feb 9 12:06:03.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:06:03.232: INFO: namespace: e2e-tests-deployment-f58nw, resource: bindings, ignored listing per whitelist Feb 9 12:06:03.289: INFO: namespace e2e-tests-deployment-f58nw deletion completed in 8.178423339s • [SLOW TEST:33.571 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:06:03.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 9 12:06:04.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:06.636: INFO: stderr: "" Feb 9 12:06:06.636: INFO: stdout: "pod/pause created\n" Feb 9 12:06:06.636: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 9 12:06:06.637: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-k6lc8" to be "running and ready" Feb 9 12:06:06.652: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.743451ms Feb 9 12:06:08.680: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043122184s Feb 9 12:06:10.692: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055176284s Feb 9 12:06:12.996: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359084277s Feb 9 12:06:15.011: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.37463998s Feb 9 12:06:17.022: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.385586479s Feb 9 12:06:17.022: INFO: Pod "pause" satisfied condition "running and ready" Feb 9 12:06:17.022: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 9 12:06:17.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:17.230: INFO: stderr: "" Feb 9 12:06:17.230: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 9 12:06:17.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:17.451: INFO: stderr: "" Feb 9 12:06:17.451: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 9 12:06:17.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:17.655: INFO: stderr: "" Feb 9 12:06:17.656: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 9 12:06:17.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:17.788: INFO: stderr: "" Feb 9 12:06:17.788: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 9 12:06:17.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:18.128: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:06:18.128: INFO: stdout: "pod \"pause\" force deleted\n" Feb 9 12:06:18.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-k6lc8' Feb 9 12:06:18.289: INFO: stderr: "No resources found.\n" Feb 9 12:06:18.289: INFO: stdout: "" Feb 9 12:06:18.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-k6lc8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 9 12:06:18.419: INFO: stderr: "" Feb 9 12:06:18.420: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:06:18.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k6lc8" for this suite. 
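For reference, the pause pod that the label test operates on can be written as a manifest along the following lines. The pod name and the name=pause label are inferred from the kubectl commands in the log (cleanup lists resources with "-l name=pause"), while the image is an assumption, since the manifest itself is piped to "kubectl create -f -" and never printed. The test then attaches the label with "kubectl label pods pause testing-label=testing-label-value" and strips it again with "kubectl label pods pause testing-label-".

apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause                     # matches the "-l name=pause" selector used during cleanup
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1     # image assumed for illustration; the log never shows it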
Feb 9 12:06:24.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:06:24.645: INFO: namespace: e2e-tests-kubectl-k6lc8, resource: bindings, ignored listing per whitelist Feb 9 12:06:24.670: INFO: namespace e2e-tests-kubectl-k6lc8 deletion completed in 6.235586644s • [SLOW TEST:21.381 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:06:24.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 12:06:24.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-bfh7f" to be "success or failure" Feb 9 12:06:24.894: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.986812ms Feb 9 12:06:26.909: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028393758s Feb 9 12:06:28.967: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086592479s Feb 9 12:06:31.585: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704912055s Feb 9 12:06:33.605: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724698556s Feb 9 12:06:35.615: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.734741988s STEP: Saw pod success Feb 9 12:06:35.615: INFO: Pod "downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:06:35.618: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 12:06:35.720: INFO: Waiting for pod downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005 to disappear Feb 9 12:06:36.552: INFO: Pod downwardapi-volume-97fd9ff5-4b34-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:06:36.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bfh7f" for this suite. Feb 9 12:06:44.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:06:45.036: INFO: namespace: e2e-tests-downward-api-bfh7f, resource: bindings, ignored listing per whitelist Feb 9 12:06:45.104: INFO: namespace e2e-tests-downward-api-bfh7f deletion completed in 8.501129975s • [SLOW TEST:20.434 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:06:45.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 9 12:06:45.273: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 9 12:06:45.286: INFO: Waiting for terminating namespaces to be deleted... 
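The Downward API volume test above creates a pod whose container reads its own memory request from a file projected by a downwardAPI volume. A minimal sketch of such a pod follows; the container name client-container is taken from the log, while the image, command, mount path and request size are assumptions made for illustration only.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # name assumed; the real pod carries a generated UID suffix
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # image and command assumed
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                  # the projected file reports this request in bytes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory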
Feb 9 12:06:45.288: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 9 12:06:45.299: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 9 12:06:45.299: INFO: Container weave ready: true, restart count 0 Feb 9 12:06:45.299: INFO: Container weave-npc ready: true, restart count 0 Feb 9 12:06:45.299: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 9 12:06:45.299: INFO: Container coredns ready: true, restart count 0 Feb 9 12:06:45.299: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 9 12:06:45.299: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 9 12:06:45.299: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 9 12:06:45.299: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 9 12:06:45.299: INFO: Container coredns ready: true, restart count 0 Feb 9 12:06:45.299: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 9 12:06:45.299: INFO: Container kube-proxy ready: true, restart count 0 Feb 9 12:06:45.299: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f1ba808798b1b3], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:06:46.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-vmgwk" for this suite. 
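The FailedScheduling event above is exactly what a pod with a node selector that no node satisfies produces. A minimal reproduction is sketched below; the pod name is taken from the event, while the selector key/value and the image are chosen arbitrarily for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e-test: no-such-node            # no node carries this label, so the pod cannot be scheduled
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1       # image assumed

With a single node and no matching label, the scheduler reports "0/1 nodes are available: 1 node(s) didn't match node selector", as logged above.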
Feb 9 12:06:52.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:06:52.617: INFO: namespace: e2e-tests-sched-pred-vmgwk, resource: bindings, ignored listing per whitelist Feb 9 12:06:52.692: INFO: namespace e2e-tests-sched-pred-vmgwk deletion completed in 6.24646374s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.587 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:06:52.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 9 12:06:53.008: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-vwdmk,SelfLink:/api/v1/namespaces/e2e-tests-watch-vwdmk/configmaps/e2e-watch-test-label-changed,UID:a8b77841-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084992,Generation:0,CreationTimestamp:2020-02-09 12:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 9 12:06:53.009: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-vwdmk,SelfLink:/api/v1/namespaces/e2e-tests-watch-vwdmk/configmaps/e2e-watch-test-label-changed,UID:a8b77841-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084993,Generation:0,CreationTimestamp:2020-02-09 12:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 9 12:06:53.009: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-vwdmk,SelfLink:/api/v1/namespaces/e2e-tests-watch-vwdmk/configmaps/e2e-watch-test-label-changed,UID:a8b77841-4b34-11ea-a994-fa163e34d433,ResourceVersion:21084994,Generation:0,CreationTimestamp:2020-02-09 12:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 9 12:07:03.095: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-vwdmk,SelfLink:/api/v1/namespaces/e2e-tests-watch-vwdmk/configmaps/e2e-watch-test-label-changed,UID:a8b77841-4b34-11ea-a994-fa163e34d433,ResourceVersion:21085008,Generation:0,CreationTimestamp:2020-02-09 12:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 9 12:07:03.096: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-vwdmk,SelfLink:/api/v1/namespaces/e2e-tests-watch-vwdmk/configmaps/e2e-watch-test-label-changed,UID:a8b77841-4b34-11ea-a994-fa163e34d433,ResourceVersion:21085009,Generation:0,CreationTimestamp:2020-02-09 12:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 9 12:07:03.096: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-vwdmk,SelfLink:/api/v1/namespaces/e2e-tests-watch-vwdmk/configmaps/e2e-watch-test-label-changed,UID:a8b77841-4b34-11ea-a994-fa163e34d433,ResourceVersion:21085010,Generation:0,CreationTimestamp:2020-02-09 12:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:07:03.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-vwdmk" for this suite. 
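The three notifications above correspond to a single configmap whose label moves out of and back into the watched selector. Reconstructed from the object dumps in the log, the watched object looks like the manifest below; a watch restricted to the same selector (for example "kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch") sees a DELETED event when the label is changed away and an ADDED event when it is restored, even though the object itself is only ever modified.

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"        # incremented by the test on each modification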
Feb 9 12:07:09.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:07:09.409: INFO: namespace: e2e-tests-watch-vwdmk, resource: bindings, ignored listing per whitelist Feb 9 12:07:09.507: INFO: namespace e2e-tests-watch-vwdmk deletion completed in 6.366714516s • [SLOW TEST:16.814 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:07:09.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 12:07:09.708: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:07:20.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lpsr8" for this suite. 
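The remote-command test above only needs a long-running container to exec into; the interesting part is the transport, since the exec subresource under /api/v1/namespaces/<namespace>/pods/<name>/exec can be reached over a WebSocket upgrade as well as SPDY. A minimal target pod might look like the following; the name, image and command are assumptions, as the log does not print the manifest.

apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websockets             # name assumed
spec:
  containers:
  - name: main
    image: busybox                      # image assumed
    command: ["sh", "-c", "sleep 3600"] # keep the container alive so exec has a target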
Feb 9 12:08:14.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:08:14.359: INFO: namespace: e2e-tests-pods-lpsr8, resource: bindings, ignored listing per whitelist Feb 9 12:08:14.500: INFO: namespace e2e-tests-pods-lpsr8 deletion completed in 54.303415216s • [SLOW TEST:64.992 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:08:14.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:08:24.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-n6dnq" for this suite. 
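The read-only busybox test above verifies that a container whose root filesystem is mounted read-only cannot write to it. A sketch of such a pod follows; securityContext.readOnlyRootFilesystem is the standard API field, while the pod name, image and command are assumed for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs             # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-fs
    image: busybox
    command: ["sh", "-c", "echo hello > /file"]   # expected to fail on a read-only root filesystem
    securityContext:
      readOnlyRootFilesystem: true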
Feb 9 12:09:18.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:09:19.006: INFO: namespace: e2e-tests-kubelet-test-n6dnq, resource: bindings, ignored listing per whitelist Feb 9 12:09:19.037: INFO: namespace e2e-tests-kubelet-test-n6dnq deletion completed in 54.232144886s • [SLOW TEST:64.537 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:09:19.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nsswp Feb 9 12:09:29.266: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nsswp STEP: checking the pod's current state and verifying that restartCount is present Feb 9 12:09:29.279: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:13:31.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nsswp" for this suite. 
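The liveness-exec pod above keeps its restart count at 0 because the probed file stays in place for the whole observation window. A minimal pod of that shape is sketched below; the pod name and the "cat /tmp/health" probe come from the test title and log, while the image, sleep duration and probe timings are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  labels:
    test: liveness                      # label assumed
spec:
  containers:
  - name: liveness
    image: busybox                      # image assumed
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]   # the file is never removed, so the probe keeps passing
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15           # probe timings assumed
      periodSeconds: 5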
Feb 9 12:13:39.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:13:39.588: INFO: namespace: e2e-tests-container-probe-nsswp, resource: bindings, ignored listing per whitelist Feb 9 12:13:39.592: INFO: namespace e2e-tests-container-probe-nsswp deletion completed in 8.318827053s • [SLOW TEST:260.555 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:13:39.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-9b35ea49-4b35-11ea-aa78-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:13:53.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4cb78" for this suite. 
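The binary-data test above exercises the binaryData field of a ConfigMap, which holds base64-encoded bytes alongside ordinary data entries and is surfaced to the pod as a file. A sketch follows, with all names and contents invented for illustration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example        # name assumed
data:
  text: "some plain text"
binaryData:
  binary: aGVsbG8gd29ybGQ=              # base64-encoded bytes; value assumed
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-binary            # name assumed
spec:
  containers:
  - name: cm-volume-test
    image: busybox                      # image and command assumed
    command: ["sh", "-c", "cat /etc/cm/text /etc/cm/binary; sleep 300"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-binary-example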
Feb 9 12:14:18.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:14:18.258: INFO: namespace: e2e-tests-configmap-4cb78, resource: bindings, ignored listing per whitelist Feb 9 12:14:18.297: INFO: namespace e2e-tests-configmap-4cb78 deletion completed in 24.239069656s • [SLOW TEST:38.704 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:14:18.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-b23f76c6-4b35-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:14:18.426: INFO: Waiting up to 5m0s for pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-sxn6j" to be "success or failure" Feb 9 12:14:18.505: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 79.754132ms Feb 9 12:14:20.546: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12011652s Feb 9 12:14:22.611: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185087296s Feb 9 12:14:25.044: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618716617s Feb 9 12:14:27.085: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658921861s Feb 9 12:14:29.098: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672445977s STEP: Saw pod success Feb 9 12:14:29.098: INFO: Pod "pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:14:29.108: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 12:14:29.342: INFO: Waiting for pod pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005 to disappear Feb 9 12:14:29.362: INFO: Pod pod-secrets-b24055a1-4b35-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:14:29.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-sxn6j" for this suite. 
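"Mappings and Item Mode set" refers to the items list of a secret volume source, which remaps a secret key to a different file path and sets a per-file mode. A sketch follows; the container name secret-volume-test appears in the log above, everything else (object names, key, path, mode 0400) is assumed for illustration.

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example         # name assumed
data:
  data-1: dmFsdWUtMQ==                  # base64 for "value-1"; contents assumed
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example             # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                      # image and command assumed
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1           # key is remapped to this file name
        mode: 0400                      # per-item file mode (octal); value assumed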
Feb 9 12:14:36.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:14:36.773: INFO: namespace: e2e-tests-secrets-sxn6j, resource: bindings, ignored listing per whitelist Feb 9 12:14:36.792: INFO: namespace e2e-tests-secrets-sxn6j deletion completed in 7.417671794s • [SLOW TEST:18.495 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:14:36.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005 Feb 9 12:14:37.027: INFO: Pod name my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005: Found 0 pods out of 1 Feb 9 12:14:42.049: INFO: Pod name my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005: Found 1 pods out of 1 Feb 9 12:14:42.049: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005" are running Feb 9 12:14:48.189: INFO: Pod "my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005-xwtqk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 12:14:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 12:14:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 12:14:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 12:14:37 +0000 UTC Reason: Message:}]) Feb 9 12:14:48.189: INFO: Trying to dial the pod Feb 9 12:14:53.224: INFO: Controller my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005: Got expected result from replica 1 [my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005-xwtqk]: "my-hostname-basic-bd52f1be-4b35-11ea-aa78-0242ac110005-xwtqk", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:14:53.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-jhtsx" for this suite. 
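The ReplicationController test above runs a single replica of an image that answers with its own hostname and then dials the pod to check the reply. A hedged sketch of such a controller follows; the generated name suffix is omitted, and the image and port are assumptions, since the log does not show the spec.

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example       # the real object carries a generated UID suffix
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # image assumed
        ports:
        - containerPort: 9376                                         # port assumed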
Feb 9 12:14:59.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:14:59.418: INFO: namespace: e2e-tests-replication-controller-jhtsx, resource: bindings, ignored listing per whitelist Feb 9 12:14:59.475: INFO: namespace e2e-tests-replication-controller-jhtsx deletion completed in 6.24250689s • [SLOW TEST:22.683 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:14:59.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cpw76 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpw76 to expose endpoints map[] Feb 9 12:14:59.766: INFO: Get endpoints failed (8.525319ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 9 12:15:00.783: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpw76 exposes endpoints map[] (1.025780037s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-cpw76 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpw76 to expose endpoints map[pod1:[100]] Feb 9 12:15:07.249: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.431912624s elapsed, will retry) Feb 9 12:15:13.794: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (12.976987673s elapsed, will retry) Feb 9 12:15:14.811: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpw76 exposes endpoints map[pod1:[100]] (13.993669034s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-cpw76 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpw76 to expose endpoints map[pod1:[100] pod2:[101]] Feb 9 12:15:21.382: INFO: Unexpected endpoints: found map[cb832d3d-4b35-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (6.558499126s elapsed, will retry) Feb 9 12:15:24.527: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpw76 exposes endpoints map[pod1:[100] pod2:[101]] (9.704104882s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-cpw76 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpw76 to expose endpoints map[pod2:[101]] Feb 9 12:15:24.695: INFO: successfully validated that service multi-endpoint-test in namespace 
e2e-tests-services-cpw76 exposes endpoints map[pod2:[101]] (89.092166ms elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-cpw76 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cpw76 to expose endpoints map[] Feb 9 12:15:24.837: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cpw76 exposes endpoints map[] (25.901647ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:15:25.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-cpw76" for this suite. Feb 9 12:15:47.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:15:48.015: INFO: namespace: e2e-tests-services-cpw76, resource: bindings, ignored listing per whitelist Feb 9 12:15:48.089: INFO: namespace e2e-tests-services-cpw76 deletion completed in 23.041247767s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.613 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:15:48.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-e7c93ae1-4b35-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:15:48.334: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-5l4bh" to be "success or failure" Feb 9 12:15:48.403: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.563458ms Feb 9 12:15:50.531: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197239285s Feb 9 12:15:52.570: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236199958s Feb 9 12:15:54.874: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.539914136s Feb 9 12:15:56.886: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.552493825s Feb 9 12:15:59.189: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.855294591s Feb 9 12:16:01.217: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.882747246s STEP: Saw pod success Feb 9 12:16:01.217: INFO: Pod "pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:16:01.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 9 12:16:01.566: INFO: Waiting for pod pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005 to disappear Feb 9 12:16:01.586: INFO: Pod pod-projected-secrets-e7d4fb83-4b35-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:16:01.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5l4bh" for this suite. Feb 9 12:16:07.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:16:07.698: INFO: namespace: e2e-tests-projected-5l4bh, resource: bindings, ignored listing per whitelist Feb 9 12:16:07.756: INFO: namespace e2e-tests-projected-5l4bh deletion completed in 6.147742825s • [SLOW TEST:19.666 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:16:07.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-zvbw2/configmap-test-f396cc46-4b35-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 12:16:08.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-zvbw2" to be "success or failure" Feb 9 12:16:08.213: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 152.702454ms Feb 9 12:16:10.581: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520560038s Feb 9 12:16:12.612: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.551829018s Feb 9 12:16:14.659: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598511244s Feb 9 12:16:16.672: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.611406381s Feb 9 12:16:18.694: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633176061s STEP: Saw pod success Feb 9 12:16:18.694: INFO: Pod "pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:16:18.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005 container env-test: STEP: delete the pod Feb 9 12:16:18.833: INFO: Waiting for pod pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005 to disappear Feb 9 12:16:18.847: INFO: Pod pod-configmaps-f3978e42-4b35-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:16:18.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zvbw2" for this suite. Feb 9 12:16:25.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:16:25.162: INFO: namespace: e2e-tests-configmap-zvbw2, resource: bindings, ignored listing per whitelist Feb 9 12:16:25.215: INFO: namespace e2e-tests-configmap-zvbw2 deletion completed in 6.250688211s • [SLOW TEST:17.458 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:16:25.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 9 12:16:33.766: INFO: 10 pods remaining Feb 9 12:16:33.767: INFO: 10 pods has nil DeletionTimestamp Feb 9 12:16:33.767: INFO: Feb 9 12:16:35.180: INFO: 10 pods remaining Feb 9 12:16:35.180: INFO: 7 pods has nil DeletionTimestamp Feb 9 12:16:35.180: INFO: Feb 9 12:16:35.935: INFO: 0 pods remaining Feb 9 12:16:35.935: INFO: 0 pods has nil DeletionTimestamp Feb 9 12:16:35.935: INFO: Feb 9 12:16:36.605: INFO: 0 pods remaining Feb 9 12:16:36.605: INFO: 0 pods has nil DeletionTimestamp Feb 9 12:16:36.605: INFO: STEP: Gathering metrics W0209 12:16:37.481351 8 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 9 12:16:37.481: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:16:37.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xms9f" for this suite. Feb 9 12:16:49.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:16:49.981: INFO: namespace: e2e-tests-gc-xms9f, resource: bindings, ignored listing per whitelist Feb 9 12:16:50.015: INFO: namespace e2e-tests-gc-xms9f deletion completed in 12.529665436s • [SLOW TEST:24.800 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:16:50.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n6s47 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 9 12:16:51.138: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 9 12:17:31.467: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-n6s47 PodName:host-test-container-pod ContainerName:hostexec 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 9 12:17:31.467: INFO: >>> kubeConfig: /root/.kube/config I0209 12:17:31.569639 8 log.go:172] (0xc0018060b0) (0xc00146aa00) Create stream I0209 12:17:31.569709 8 log.go:172] (0xc0018060b0) (0xc00146aa00) Stream added, broadcasting: 1 I0209 12:17:31.578830 8 log.go:172] (0xc0018060b0) Reply frame received for 1 I0209 12:17:31.579009 8 log.go:172] (0xc0018060b0) (0xc001a02780) Create stream I0209 12:17:31.579035 8 log.go:172] (0xc0018060b0) (0xc001a02780) Stream added, broadcasting: 3 I0209 12:17:31.582231 8 log.go:172] (0xc0018060b0) Reply frame received for 3 I0209 12:17:31.582270 8 log.go:172] (0xc0018060b0) (0xc00233c000) Create stream I0209 12:17:31.582287 8 log.go:172] (0xc0018060b0) (0xc00233c000) Stream added, broadcasting: 5 I0209 12:17:31.584380 8 log.go:172] (0xc0018060b0) Reply frame received for 5 I0209 12:17:32.751411 8 log.go:172] (0xc0018060b0) Data frame received for 3 I0209 12:17:32.751673 8 log.go:172] (0xc001a02780) (3) Data frame handling I0209 12:17:32.751759 8 log.go:172] (0xc001a02780) (3) Data frame sent I0209 12:17:32.912983 8 log.go:172] (0xc0018060b0) (0xc001a02780) Stream removed, broadcasting: 3 I0209 12:17:32.913248 8 log.go:172] (0xc0018060b0) Data frame received for 1 I0209 12:17:32.913316 8 log.go:172] (0xc00146aa00) (1) Data frame handling I0209 12:17:32.913394 8 log.go:172] (0xc00146aa00) (1) Data frame sent I0209 12:17:32.913424 8 log.go:172] (0xc0018060b0) (0xc00146aa00) Stream removed, broadcasting: 1 I0209 12:17:32.913446 8 log.go:172] (0xc0018060b0) (0xc00233c000) Stream removed, broadcasting: 5 I0209 12:17:32.913651 8 log.go:172] (0xc0018060b0) Go away received I0209 12:17:32.913903 8 log.go:172] (0xc0018060b0) (0xc00146aa00) Stream removed, broadcasting: 1 I0209 12:17:32.913946 8 log.go:172] (0xc0018060b0) (0xc001a02780) Stream removed, broadcasting: 3 I0209 12:17:32.914020 8 log.go:172] (0xc0018060b0) (0xc00233c000) Stream removed, broadcasting: 5 Feb 9 12:17:32.914: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:17:32.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-n6s47" for this suite. 
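For context on the node-pod UDP check that just completed: the nc pipeline in the ExecWithOptions entry above sends the string 'hostName' over UDP to 10.32.0.4:8081 from a host-network test pod and expects the target pod's hostname back ("Found all expected endpoints: [netserver-0]"). A minimal sketch of the kind of UDP listener pod such a check targets; the label and image are illustrative assumptions, only the pod name and UDP port appear in this run:

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
  labels:
    selector: netserver        # hypothetical label; "creating a selector" above does not show its key/value
spec:
  containers:
  - name: webserver
    image: example.com/netexec:latest   # hypothetical image that answers UDP requests on 8081 with its hostname
    ports:
    - containerPort: 8081
      protocol: UDP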
Feb 9 12:17:56.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:17:57.111: INFO: namespace: e2e-tests-pod-network-test-n6s47, resource: bindings, ignored listing per whitelist Feb 9 12:17:57.183: INFO: namespace e2e-tests-pod-network-test-n6s47 deletion completed in 24.245859255s • [SLOW TEST:67.167 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:17:57.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Feb 9 12:17:57.396: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 9 12:17:57.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:17:59.561: INFO: stderr: "" Feb 9 12:17:59.561: INFO: stdout: "service/redis-slave created\n" Feb 9 12:17:59.563: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 9 12:17:59.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:00.147: INFO: stderr: "" Feb 9 12:18:00.147: INFO: stdout: "service/redis-master created\n" Feb 9 12:18:00.148: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 9 12:18:00.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:00.720: INFO: stderr: "" Feb 9 12:18:00.721: INFO: stdout: "service/frontend created\n" Feb 9 12:18:00.721: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 9 12:18:00.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:01.098: INFO: stderr: "" Feb 9 12:18:01.098: INFO: stdout: "deployment.extensions/frontend created\n" Feb 9 12:18:01.099: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 9 12:18:01.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:01.673: INFO: stderr: "" Feb 9 12:18:01.673: INFO: stdout: "deployment.extensions/redis-master created\n" Feb 9 12:18:01.675: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Feb 9 12:18:01.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:02.291: INFO: stderr: "" Feb 9 12:18:02.291: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Feb 9 12:18:02.291: INFO: Waiting for all frontend pods to be Running. Feb 9 12:18:37.346: INFO: Waiting for frontend to serve content. Feb 9 12:18:37.411: INFO: Trying to add a new entry to the guestbook. Feb 9 12:18:37.451: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 9 12:18:37.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:38.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:18:38.561: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 9 12:18:38.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:38.928: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:18:38.929: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 9 12:18:38.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:39.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:18:39.328: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 9 12:18:39.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:39.467: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:18:39.468: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 9 12:18:39.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:39.704: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:18:39.704: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 9 12:18:39.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wgjn5' Feb 9 12:18:40.154: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:18:40.155: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:18:40.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wgjn5" for this suite. 
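A side note on the manifests used above: the guestbook Deployments are created as apiVersion: extensions/v1beta1, which later Kubernetes releases no longer serve. An apps/v1 rendering of the same frontend Deployment is sketched below; images, labels, replicas, and resources are copied from the manifest in the log, and the explicit spec.selector block (required by apps/v1, optional in extensions/v1beta1) is the only addition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                    # apps/v1 requires this; it must match the template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80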
Feb 9 12:19:26.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:19:26.776: INFO: namespace: e2e-tests-kubectl-wgjn5, resource: bindings, ignored listing per whitelist Feb 9 12:19:26.803: INFO: namespace e2e-tests-kubectl-wgjn5 deletion completed in 46.57975648s • [SLOW TEST:89.620 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:19:26.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 12:19:26.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-kfzgs" to be "success or failure" Feb 9 12:19:26.997: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.774694ms Feb 9 12:19:29.457: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475625187s Feb 9 12:19:31.484: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503305995s Feb 9 12:19:34.028: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.047233814s Feb 9 12:19:36.048: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.066779436s Feb 9 12:19:38.058: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.077406615s STEP: Saw pod success Feb 9 12:19:38.058: INFO: Pod "downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:19:38.064: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 12:19:38.527: INFO: Waiting for pod downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005 to disappear Feb 9 12:19:38.851: INFO: Pod downwardapi-volume-6a2aac17-4b36-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:19:38.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kfzgs" for this suite. Feb 9 12:19:44.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:19:45.163: INFO: namespace: e2e-tests-projected-kfzgs, resource: bindings, ignored listing per whitelist Feb 9 12:19:45.234: INFO: namespace e2e-tests-projected-kfzgs deletion completed in 6.362290428s • [SLOW TEST:18.431 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:19:45.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7532f24a-4b36-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:19:45.512: INFO: Waiting up to 5m0s for pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-td7sj" to be "success or failure" Feb 9 12:19:45.528: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.174173ms Feb 9 12:19:47.749: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23711592s Feb 9 12:19:49.774: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262050825s Feb 9 12:19:52.656: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.144181766s Feb 9 12:19:54.677: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.164903579s Feb 9 12:19:56.699: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.187537453s Feb 9 12:19:58.728: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.216012798s STEP: Saw pod success Feb 9 12:19:58.728: INFO: Pod "pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:19:58.748: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 12:19:58.793: INFO: Waiting for pod pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005 to disappear Feb 9 12:19:58.801: INFO: Pod pod-secrets-753462b6-4b36-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:19:58.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-td7sj" for this suite. Feb 9 12:20:04.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:20:04.984: INFO: namespace: e2e-tests-secrets-td7sj, resource: bindings, ignored listing per whitelist Feb 9 12:20:05.015: INFO: namespace e2e-tests-secrets-td7sj deletion completed in 6.204659772s • [SLOW TEST:19.780 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:20:05.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-80f6b369-4b36-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 9 12:20:05.239: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-6cjbl" to be "success or failure" Feb 9 12:20:05.286: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.039957ms Feb 9 12:20:07.300: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060997556s Feb 9 12:20:09.316: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077186038s Feb 9 12:20:11.332: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.092943605s Feb 9 12:20:13.346: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10735254s Feb 9 12:20:15.361: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122494213s STEP: Saw pod success Feb 9 12:20:15.362: INFO: Pod "pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:20:15.369: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 9 12:20:15.476: INFO: Waiting for pod pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005 to disappear Feb 9 12:20:15.488: INFO: Pod pod-projected-configmaps-80f780b1-4b36-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:20:15.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6cjbl" for this suite. Feb 9 12:20:23.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:20:23.783: INFO: namespace: e2e-tests-projected-6cjbl, resource: bindings, ignored listing per whitelist Feb 9 12:20:23.905: INFO: namespace e2e-tests-projected-6cjbl deletion completed in 8.405635561s • [SLOW TEST:18.890 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:20:23.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 9 12:20:24.309: INFO: Waiting up to 5m0s for pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-4d75b" to be "success or failure" Feb 9 12:20:24.331: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.230131ms Feb 9 12:20:26.352: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04296209s Feb 9 12:20:28.369: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.060133332s Feb 9 12:20:30.809: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.500235821s Feb 9 12:20:33.126: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817078323s Feb 9 12:20:35.147: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.837886394s Feb 9 12:20:37.160: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.85094644s STEP: Saw pod success Feb 9 12:20:37.160: INFO: Pod "downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:20:37.167: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005 container dapi-container: STEP: delete the pod Feb 9 12:20:38.014: INFO: Waiting for pod downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005 to disappear Feb 9 12:20:38.061: INFO: Pod downward-api-8c4534b6-4b36-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:20:38.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4d75b" for this suite. Feb 9 12:20:46.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:20:46.689: INFO: namespace: e2e-tests-downward-api-4d75b, resource: bindings, ignored listing per whitelist Feb 9 12:20:46.703: INFO: namespace e2e-tests-downward-api-4d75b deletion completed in 8.386634751s • [SLOW TEST:22.797 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:20:46.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:21:00.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-bkffm" for this suite. 
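The three adoption steps logged above correspond to roughly the following pair of objects; a sketch only, since the log records the pod name and its 'name' label but not the container image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption          # the label the controller's selector matches
spec:
  containers:
  - name: pod-adoption
    image: nginx                # illustrative; not recorded in the log
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption          # matches the pre-existing pod, so it is adopted rather than replaced
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx            # illustrative

Because the bare pod already carries the label the controller selects on, the controller counts it toward its replica total and takes ownership of it instead of creating a new pod, which is the "orphan pod is adopted" assertion.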
Feb 9 12:21:24.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:21:24.282: INFO: namespace: e2e-tests-replication-controller-bkffm, resource: bindings, ignored listing per whitelist Feb 9 12:21:24.298: INFO: namespace e2e-tests-replication-controller-bkffm deletion completed in 24.158815076s • [SLOW TEST:37.594 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:21:24.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jv67h Feb 9 12:21:34.797: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jv67h STEP: checking the pod's current state and verifying that restartCount is present Feb 9 12:21:34.807: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:25:35.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jv67h" for this suite. 
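For reference on what this probe test asserts: the pod named liveness-http keeps serving /healthz successfully for the whole observation window, so its restartCount stays at 0 (hence the roughly four-minute gap between "Initial restart count of pod liveness-http is 0" above and the teardown). A minimal sketch of such a pod; only the name comes from the log, while the image, port, and probe timings are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: example.com/healthz-server:latest   # hypothetical image that always answers 200 on /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080               # illustrative port
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3        # illustrative timings; as long as /healthz keeps succeeding, no restart occurs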
Feb 9 12:25:41.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:25:41.631: INFO: namespace: e2e-tests-container-probe-jv67h, resource: bindings, ignored listing per whitelist Feb 9 12:25:41.656: INFO: namespace e2e-tests-container-probe-jv67h deletion completed in 6.291672207s • [SLOW TEST:257.357 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:25:41.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-49b23e40-4b37-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:25:42.199: INFO: Waiting up to 5m0s for pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-7r5cs" to be "success or failure" Feb 9 12:25:42.220: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.163688ms Feb 9 12:25:44.234: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035795399s Feb 9 12:25:46.300: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101086753s Feb 9 12:25:48.365: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166235378s Feb 9 12:25:50.611: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.411888296s Feb 9 12:25:52.679: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.48050632s STEP: Saw pod success Feb 9 12:25:52.679: INFO: Pod "pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:25:52.698: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 12:25:52.874: INFO: Waiting for pod pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005 to disappear Feb 9 12:25:52.880: INFO: Pod pod-secrets-49ceb8bd-4b37-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:25:52.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7r5cs" for this suite. Feb 9 12:25:58.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:25:59.077: INFO: namespace: e2e-tests-secrets-7r5cs, resource: bindings, ignored listing per whitelist Feb 9 12:25:59.099: INFO: namespace e2e-tests-secrets-7r5cs deletion completed in 6.213123903s STEP: Destroying namespace "e2e-tests-secret-namespace-wvq2t" for this suite. Feb 9 12:26:05.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:26:05.276: INFO: namespace: e2e-tests-secret-namespace-wvq2t, resource: bindings, ignored listing per whitelist Feb 9 12:26:05.279: INFO: namespace e2e-tests-secret-namespace-wvq2t deletion completed in 6.179576382s • [SLOW TEST:23.623 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:26:05.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:26:15.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-r86j7" for this suite. 
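The hostAliases case above reduces to a pod shaped like the sketch below; the busybox-with-hostAliases form comes from the test title, while the pod name, IP, and hostnames are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:                   # the kubelet appends these entries to the container's /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]   # printing the file is enough to verify the injected entries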
Feb 9 12:26:58.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:26:58.153: INFO: namespace: e2e-tests-kubelet-test-r86j7, resource: bindings, ignored listing per whitelist Feb 9 12:26:58.290: INFO: namespace e2e-tests-kubelet-test-r86j7 deletion completed in 42.215124164s • [SLOW TEST:53.011 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:26:58.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 9 12:26:58.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 9 12:26:58.696: INFO: stderr: "" Feb 9 12:26:58.696: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:26:58.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-srqdd" for this suite. 
Feb 9 12:27:04.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:27:04.899: INFO: namespace: e2e-tests-kubectl-srqdd, resource: bindings, ignored listing per whitelist Feb 9 12:27:05.030: INFO: namespace e2e-tests-kubectl-srqdd deletion completed in 6.308755844s • [SLOW TEST:6.739 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:27:05.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 12:27:05.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-fskbf" to be "success or failure" Feb 9 12:27:05.394: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 154.505897ms Feb 9 12:27:07.641: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401761161s Feb 9 12:27:09.668: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428470787s Feb 9 12:27:12.112: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.872917674s Feb 9 12:27:14.238: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.998718093s Feb 9 12:27:16.712: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.472643227s STEP: Saw pod success Feb 9 12:27:16.712: INFO: Pod "downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:27:16.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 12:27:17.007: INFO: Waiting for pod downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005 to disappear Feb 9 12:27:17.026: INFO: Pod downwardapi-volume-7b4e3e08-4b37-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:27:17.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fskbf" for this suite. Feb 9 12:27:23.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:27:23.375: INFO: namespace: e2e-tests-downward-api-fskbf, resource: bindings, ignored listing per whitelist Feb 9 12:27:23.390: INFO: namespace e2e-tests-downward-api-fskbf deletion completed in 6.318035369s • [SLOW TEST:18.360 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:27:23.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 9 12:27:23.802: INFO: Waiting up to 5m0s for pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-rkgn9" to be "success or failure" Feb 9 12:27:23.817: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.991239ms Feb 9 12:27:25.838: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036313578s Feb 9 12:27:27.882: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080675212s Feb 9 12:27:30.207: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4057186s Feb 9 12:27:32.243: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441103314s Feb 9 12:27:34.262: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.460054621s STEP: Saw pod success Feb 9 12:27:34.262: INFO: Pod "downward-api-864862e0-4b37-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:27:34.270: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-864862e0-4b37-11ea-aa78-0242ac110005 container dapi-container: STEP: delete the pod Feb 9 12:27:35.333: INFO: Waiting for pod downward-api-864862e0-4b37-11ea-aa78-0242ac110005 to disappear Feb 9 12:27:35.559: INFO: Pod downward-api-864862e0-4b37-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:27:35.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rkgn9" for this suite. Feb 9 12:27:41.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:27:41.841: INFO: namespace: e2e-tests-downward-api-rkgn9, resource: bindings, ignored listing per whitelist Feb 9 12:27:42.119: INFO: namespace e2e-tests-downward-api-rkgn9 deletion completed in 6.540544527s • [SLOW TEST:18.728 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:27:42.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
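At this point the handler pod that will receive the HTTPGet hook request exists; the next step creates the pod that carries the hook. A rough sketch of that second pod, assuming an illustrative image, path, port, and handler address (only the pod name is taken from the log lines that follow):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: example.com/sleeper:latest   # hypothetical long-running image
    lifecycle:
      preStop:
        httpGet:                        # executed by the kubelet before the container is stopped
          path: /echo
          port: 8080
          host: 10.32.0.4               # illustrative: the handler pod's IP

When the pod is deleted, the kubelet runs the preStop hook and waits for it before stopping the container, which is why the "check prestop hook" step further down can confirm the handler saw the request.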
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 9 12:28:04.632: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 9 12:28:04.747: INFO: Pod pod-with-prestop-http-hook still exists Feb 9 12:28:06.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 9 12:28:06.778: INFO: Pod pod-with-prestop-http-hook still exists Feb 9 12:28:08.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 9 12:28:08.766: INFO: Pod pod-with-prestop-http-hook still exists Feb 9 12:28:10.748: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 9 12:28:10.808: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:28:10.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-58xlk" for this suite. Feb 9 12:28:34.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:28:35.008: INFO: namespace: e2e-tests-container-lifecycle-hook-58xlk, resource: bindings, ignored listing per whitelist Feb 9 12:28:35.024: INFO: namespace e2e-tests-container-lifecycle-hook-58xlk deletion completed in 24.160044579s • [SLOW TEST:52.904 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:28:35.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 9 12:28:35.192: INFO: Waiting up to 5m0s for pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005" in namespace "e2e-tests-containers-8drbz" to be "success or failure" Feb 9 12:28:35.221: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.816725ms Feb 9 12:28:37.233: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.040326282s Feb 9 12:28:39.255: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062328603s Feb 9 12:28:41.834: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641407457s Feb 9 12:28:43.857: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66453156s Feb 9 12:28:45.872: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.679624004s STEP: Saw pod success Feb 9 12:28:45.872: INFO: Pod "client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:28:45.877: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 12:28:46.658: INFO: Waiting for pod client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005 to disappear Feb 9 12:28:46.683: INFO: Pod client-containers-b0ed1182-4b37-11ea-aa78-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:28:46.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8drbz" for this suite. Feb 9 12:28:53.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:28:53.410: INFO: namespace: e2e-tests-containers-8drbz, resource: bindings, ignored listing per whitelist Feb 9 12:28:53.421: INFO: namespace e2e-tests-containers-8drbz deletion completed in 6.327040139s • [SLOW TEST:18.396 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:28:53.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2gkvt I0209 12:28:53.637373 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2gkvt, replica count: 1 I0209 12:28:54.688737 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:28:55.689414 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:28:56.689960 8 runners.go:184] svc-latency-rc 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:28:57.690492 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:28:58.691503 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:28:59.692357 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:29:00.693049 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:29:01.693860 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:29:02.694364 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:29:03.695639 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0209 12:29:04.696549 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 9 12:29:04.863: INFO: Created: latency-svc-4lbq7 Feb 9 12:29:05.037: INFO: Got endpoints: latency-svc-4lbq7 [240.448827ms] Feb 9 12:29:05.268: INFO: Created: latency-svc-2vng6 Feb 9 12:29:05.301: INFO: Got endpoints: latency-svc-2vng6 [263.558136ms] Feb 9 12:29:05.446: INFO: Created: latency-svc-n2ps8 Feb 9 12:29:05.476: INFO: Got endpoints: latency-svc-n2ps8 [436.832962ms] Feb 9 12:29:05.526: INFO: Created: latency-svc-rn8zf Feb 9 12:29:05.694: INFO: Got endpoints: latency-svc-rn8zf [656.035089ms] Feb 9 12:29:05.727: INFO: Created: latency-svc-shprw Feb 9 12:29:05.747: INFO: Got endpoints: latency-svc-shprw [707.140896ms] Feb 9 12:29:05.883: INFO: Created: latency-svc-z5fsd Feb 9 12:29:05.894: INFO: Got endpoints: latency-svc-z5fsd [854.222757ms] Feb 9 12:29:05.945: INFO: Created: latency-svc-54lnv Feb 9 12:29:05.971: INFO: Got endpoints: latency-svc-54lnv [931.532442ms] Feb 9 12:29:06.129: INFO: Created: latency-svc-gslsr Feb 9 12:29:06.151: INFO: Got endpoints: latency-svc-gslsr [1.111643164s] Feb 9 12:29:06.221: INFO: Created: latency-svc-fj4x6 Feb 9 12:29:06.333: INFO: Got endpoints: latency-svc-fj4x6 [1.292946001s] Feb 9 12:29:06.367: INFO: Created: latency-svc-4wsmj Feb 9 12:29:06.388: INFO: Got endpoints: latency-svc-4wsmj [1.349333059s] Feb 9 12:29:06.420: INFO: Created: latency-svc-pszxs Feb 9 12:29:06.537: INFO: Got endpoints: latency-svc-pszxs [1.497044884s] Feb 9 12:29:06.597: INFO: Created: latency-svc-f9kwg Feb 9 12:29:06.718: INFO: Got endpoints: latency-svc-f9kwg [1.67768505s] Feb 9 12:29:06.740: INFO: Created: latency-svc-8mr96 Feb 9 12:29:06.743: INFO: Got endpoints: latency-svc-8mr96 [1.702554505s] Feb 9 12:29:06.789: INFO: Created: latency-svc-7j8ln Feb 9 12:29:06.967: INFO: Got endpoints: latency-svc-7j8ln [1.927747975s] Feb 9 12:29:07.005: INFO: Created: latency-svc-sfn6m Feb 9 12:29:07.040: INFO: Got endpoints: latency-svc-sfn6m [2.000376511s] Feb 9 12:29:07.255: INFO: Created: latency-svc-68mnm Feb 9 12:29:07.296: INFO: Got endpoints: latency-svc-68mnm [2.256201049s] Feb 9 12:29:07.484: 
INFO: Created: latency-svc-wjwsb Feb 9 12:29:07.501: INFO: Got endpoints: latency-svc-wjwsb [2.199710772s] Feb 9 12:29:07.579: INFO: Created: latency-svc-nwdcq Feb 9 12:29:07.803: INFO: Created: latency-svc-vlbdr Feb 9 12:29:07.818: INFO: Got endpoints: latency-svc-nwdcq [2.342755871s] Feb 9 12:29:07.857: INFO: Got endpoints: latency-svc-vlbdr [2.163090175s] Feb 9 12:29:07.947: INFO: Created: latency-svc-cgk2c Feb 9 12:29:08.020: INFO: Got endpoints: latency-svc-cgk2c [2.272154845s] Feb 9 12:29:08.023: INFO: Created: latency-svc-xmv2r Feb 9 12:29:08.160: INFO: Got endpoints: latency-svc-xmv2r [2.265374005s] Feb 9 12:29:08.185: INFO: Created: latency-svc-4gvrt Feb 9 12:29:08.214: INFO: Got endpoints: latency-svc-4gvrt [2.242342811s] Feb 9 12:29:08.274: INFO: Created: latency-svc-c54gl Feb 9 12:29:08.372: INFO: Got endpoints: latency-svc-c54gl [2.221359254s] Feb 9 12:29:08.400: INFO: Created: latency-svc-8bn2w Feb 9 12:29:08.432: INFO: Got endpoints: latency-svc-8bn2w [2.098658685s] Feb 9 12:29:08.649: INFO: Created: latency-svc-c6ls8 Feb 9 12:29:08.672: INFO: Got endpoints: latency-svc-c6ls8 [2.283892311s] Feb 9 12:29:08.750: INFO: Created: latency-svc-llhzf Feb 9 12:29:08.949: INFO: Got endpoints: latency-svc-llhzf [2.412466445s] Feb 9 12:29:09.040: INFO: Created: latency-svc-2ffzk Feb 9 12:29:09.232: INFO: Got endpoints: latency-svc-2ffzk [2.514272936s] Feb 9 12:29:09.290: INFO: Created: latency-svc-h7dt9 Feb 9 12:29:09.320: INFO: Got endpoints: latency-svc-h7dt9 [2.576872894s] Feb 9 12:29:09.450: INFO: Created: latency-svc-gcrv5 Feb 9 12:29:09.484: INFO: Got endpoints: latency-svc-gcrv5 [2.51638894s] Feb 9 12:29:09.624: INFO: Created: latency-svc-vwrx7 Feb 9 12:29:09.691: INFO: Got endpoints: latency-svc-vwrx7 [2.650788328s] Feb 9 12:29:09.708: INFO: Created: latency-svc-qk7g4 Feb 9 12:29:09.832: INFO: Got endpoints: latency-svc-qk7g4 [2.535793157s] Feb 9 12:29:09.912: INFO: Created: latency-svc-z4g88 Feb 9 12:29:09.918: INFO: Got endpoints: latency-svc-z4g88 [2.416065253s] Feb 9 12:29:10.194: INFO: Created: latency-svc-lvb24 Feb 9 12:29:10.228: INFO: Got endpoints: latency-svc-lvb24 [2.409745961s] Feb 9 12:29:10.394: INFO: Created: latency-svc-h4xrn Feb 9 12:29:10.404: INFO: Got endpoints: latency-svc-h4xrn [2.546345215s] Feb 9 12:29:10.583: INFO: Created: latency-svc-xxgm8 Feb 9 12:29:10.609: INFO: Got endpoints: latency-svc-xxgm8 [2.5892769s] Feb 9 12:29:10.805: INFO: Created: latency-svc-mzhn9 Feb 9 12:29:10.847: INFO: Got endpoints: latency-svc-mzhn9 [2.687081705s] Feb 9 12:29:10.885: INFO: Created: latency-svc-p7mr4 Feb 9 12:29:11.100: INFO: Got endpoints: latency-svc-p7mr4 [2.88642425s] Feb 9 12:29:11.123: INFO: Created: latency-svc-wmkzt Feb 9 12:29:11.175: INFO: Got endpoints: latency-svc-wmkzt [2.802688547s] Feb 9 12:29:11.437: INFO: Created: latency-svc-jlcwj Feb 9 12:29:11.464: INFO: Got endpoints: latency-svc-jlcwj [3.031552873s] Feb 9 12:29:11.524: INFO: Created: latency-svc-ttjx6 Feb 9 12:29:11.612: INFO: Got endpoints: latency-svc-ttjx6 [2.93987992s] Feb 9 12:29:11.642: INFO: Created: latency-svc-d46kr Feb 9 12:29:11.662: INFO: Got endpoints: latency-svc-d46kr [2.712799602s] Feb 9 12:29:11.822: INFO: Created: latency-svc-246d4 Feb 9 12:29:11.869: INFO: Got endpoints: latency-svc-246d4 [2.636042102s] Feb 9 12:29:11.911: INFO: Created: latency-svc-d4sqt Feb 9 12:29:12.031: INFO: Got endpoints: latency-svc-d4sqt [2.710418903s] Feb 9 12:29:12.073: INFO: Created: latency-svc-hrdn6 Feb 9 12:29:12.100: INFO: Got endpoints: latency-svc-hrdn6 [231.148729ms] Feb 9 12:29:12.308: 
INFO: Created: latency-svc-4769n Feb 9 12:29:12.308: INFO: Got endpoints: latency-svc-4769n [2.823250176s] Feb 9 12:29:12.444: INFO: Created: latency-svc-qch4s Feb 9 12:29:12.482: INFO: Got endpoints: latency-svc-qch4s [2.790356121s] Feb 9 12:29:12.674: INFO: Created: latency-svc-9btgs Feb 9 12:29:12.678: INFO: Got endpoints: latency-svc-9btgs [2.845035441s] Feb 9 12:29:12.947: INFO: Created: latency-svc-qc4nq Feb 9 12:29:12.956: INFO: Got endpoints: latency-svc-qc4nq [3.038503053s] Feb 9 12:29:13.115: INFO: Created: latency-svc-p7rf6 Feb 9 12:29:13.141: INFO: Got endpoints: latency-svc-p7rf6 [2.911861439s] Feb 9 12:29:13.392: INFO: Created: latency-svc-8tfdn Feb 9 12:29:13.410: INFO: Got endpoints: latency-svc-8tfdn [3.006077125s] Feb 9 12:29:13.553: INFO: Created: latency-svc-tmlpq Feb 9 12:29:13.581: INFO: Got endpoints: latency-svc-tmlpq [2.971336662s] Feb 9 12:29:13.641: INFO: Created: latency-svc-mjjl9 Feb 9 12:29:13.782: INFO: Got endpoints: latency-svc-mjjl9 [2.934466266s] Feb 9 12:29:13.834: INFO: Created: latency-svc-mrfxx Feb 9 12:29:13.858: INFO: Got endpoints: latency-svc-mrfxx [2.757385431s] Feb 9 12:29:13.993: INFO: Created: latency-svc-wgsjr Feb 9 12:29:14.029: INFO: Got endpoints: latency-svc-wgsjr [2.852556868s] Feb 9 12:29:14.224: INFO: Created: latency-svc-lrx7t Feb 9 12:29:14.266: INFO: Got endpoints: latency-svc-lrx7t [2.801287312s] Feb 9 12:29:14.406: INFO: Created: latency-svc-2kvh7 Feb 9 12:29:14.456: INFO: Got endpoints: latency-svc-2kvh7 [2.843116135s] Feb 9 12:29:14.585: INFO: Created: latency-svc-wl55w Feb 9 12:29:14.623: INFO: Got endpoints: latency-svc-wl55w [2.959679914s] Feb 9 12:29:14.803: INFO: Created: latency-svc-n55tm Feb 9 12:29:14.812: INFO: Got endpoints: latency-svc-n55tm [2.780912011s] Feb 9 12:29:14.873: INFO: Created: latency-svc-npjwz Feb 9 12:29:14.942: INFO: Got endpoints: latency-svc-npjwz [2.841039263s] Feb 9 12:29:15.007: INFO: Created: latency-svc-6zcwl Feb 9 12:29:15.024: INFO: Got endpoints: latency-svc-6zcwl [2.716151759s] Feb 9 12:29:15.171: INFO: Created: latency-svc-jk87b Feb 9 12:29:15.212: INFO: Got endpoints: latency-svc-jk87b [2.729747969s] Feb 9 12:29:15.365: INFO: Created: latency-svc-hg8pl Feb 9 12:29:15.395: INFO: Got endpoints: latency-svc-hg8pl [2.717270854s] Feb 9 12:29:15.609: INFO: Created: latency-svc-fsc6g Feb 9 12:29:15.613: INFO: Got endpoints: latency-svc-fsc6g [2.656129245s] Feb 9 12:29:15.674: INFO: Created: latency-svc-j5dtn Feb 9 12:29:15.674: INFO: Got endpoints: latency-svc-j5dtn [2.532787987s] Feb 9 12:29:15.779: INFO: Created: latency-svc-ljhwn Feb 9 12:29:15.817: INFO: Got endpoints: latency-svc-ljhwn [2.406621978s] Feb 9 12:29:15.884: INFO: Created: latency-svc-97gtg Feb 9 12:29:16.025: INFO: Got endpoints: latency-svc-97gtg [2.444239339s] Feb 9 12:29:16.049: INFO: Created: latency-svc-gqpf8 Feb 9 12:29:16.062: INFO: Got endpoints: latency-svc-gqpf8 [2.279145191s] Feb 9 12:29:16.227: INFO: Created: latency-svc-wrcll Feb 9 12:29:16.246: INFO: Got endpoints: latency-svc-wrcll [2.387777514s] Feb 9 12:29:16.310: INFO: Created: latency-svc-szn8s Feb 9 12:29:16.453: INFO: Got endpoints: latency-svc-szn8s [2.42382422s] Feb 9 12:29:16.472: INFO: Created: latency-svc-xwxtq Feb 9 12:29:16.505: INFO: Got endpoints: latency-svc-xwxtq [2.239021057s] Feb 9 12:29:16.641: INFO: Created: latency-svc-bc7zd Feb 9 12:29:16.657: INFO: Got endpoints: latency-svc-bc7zd [2.201357527s] Feb 9 12:29:16.716: INFO: Created: latency-svc-c7stg Feb 9 12:29:16.848: INFO: Got endpoints: latency-svc-c7stg [2.224759557s] Feb 9 12:29:16.893: 
INFO: Created: latency-svc-8q6rm Feb 9 12:29:16.930: INFO: Got endpoints: latency-svc-8q6rm [2.118181904s] Feb 9 12:29:17.146: INFO: Created: latency-svc-59zr7 Feb 9 12:29:17.353: INFO: Got endpoints: latency-svc-59zr7 [2.41114461s] Feb 9 12:29:17.385: INFO: Created: latency-svc-wtdld Feb 9 12:29:17.393: INFO: Got endpoints: latency-svc-wtdld [2.368356851s] Feb 9 12:29:17.552: INFO: Created: latency-svc-v2c67 Feb 9 12:29:17.566: INFO: Got endpoints: latency-svc-v2c67 [2.353756163s] Feb 9 12:29:17.623: INFO: Created: latency-svc-m9bd2 Feb 9 12:29:17.760: INFO: Got endpoints: latency-svc-m9bd2 [2.365094389s] Feb 9 12:29:17.789: INFO: Created: latency-svc-86zsn Feb 9 12:29:17.804: INFO: Got endpoints: latency-svc-86zsn [2.191209901s] Feb 9 12:29:17.966: INFO: Created: latency-svc-t6bj4 Feb 9 12:29:18.005: INFO: Got endpoints: latency-svc-t6bj4 [2.330762417s] Feb 9 12:29:18.139: INFO: Created: latency-svc-d8nb6 Feb 9 12:29:18.165: INFO: Got endpoints: latency-svc-d8nb6 [2.347791376s] Feb 9 12:29:18.231: INFO: Created: latency-svc-hf24h Feb 9 12:29:18.324: INFO: Got endpoints: latency-svc-hf24h [2.297808023s] Feb 9 12:29:18.487: INFO: Created: latency-svc-bt4ks Feb 9 12:29:18.784: INFO: Got endpoints: latency-svc-bt4ks [2.722069701s] Feb 9 12:29:18.881: INFO: Created: latency-svc-4cbwb Feb 9 12:29:19.029: INFO: Got endpoints: latency-svc-4cbwb [2.782041725s] Feb 9 12:29:19.342: INFO: Created: latency-svc-6xx84 Feb 9 12:29:19.399: INFO: Got endpoints: latency-svc-6xx84 [2.945755816s] Feb 9 12:29:19.903: INFO: Created: latency-svc-k9l54 Feb 9 12:29:19.935: INFO: Got endpoints: latency-svc-k9l54 [3.430268217s] Feb 9 12:29:20.147: INFO: Created: latency-svc-4wnbt Feb 9 12:29:20.164: INFO: Got endpoints: latency-svc-4wnbt [3.506295442s] Feb 9 12:29:20.216: INFO: Created: latency-svc-rd8w5 Feb 9 12:29:20.373: INFO: Got endpoints: latency-svc-rd8w5 [3.523765835s] Feb 9 12:29:20.387: INFO: Created: latency-svc-nprx8 Feb 9 12:29:20.432: INFO: Got endpoints: latency-svc-nprx8 [3.500704846s] Feb 9 12:29:20.600: INFO: Created: latency-svc-2vfct Feb 9 12:29:20.621: INFO: Got endpoints: latency-svc-2vfct [3.267254445s] Feb 9 12:29:20.663: INFO: Created: latency-svc-fm4x8 Feb 9 12:29:20.783: INFO: Got endpoints: latency-svc-fm4x8 [3.39001976s] Feb 9 12:29:20.799: INFO: Created: latency-svc-2hjhx Feb 9 12:29:20.825: INFO: Got endpoints: latency-svc-2hjhx [3.258808465s] Feb 9 12:29:20.878: INFO: Created: latency-svc-jmncg Feb 9 12:29:21.019: INFO: Got endpoints: latency-svc-jmncg [3.258006026s] Feb 9 12:29:21.049: INFO: Created: latency-svc-8c4g6 Feb 9 12:29:21.056: INFO: Got endpoints: latency-svc-8c4g6 [3.252435596s] Feb 9 12:29:21.239: INFO: Created: latency-svc-6r98q Feb 9 12:29:21.251: INFO: Got endpoints: latency-svc-6r98q [3.245549343s] Feb 9 12:29:21.329: INFO: Created: latency-svc-2mpzs Feb 9 12:29:21.487: INFO: Got endpoints: latency-svc-2mpzs [3.321136255s] Feb 9 12:29:21.521: INFO: Created: latency-svc-b78mr Feb 9 12:29:21.566: INFO: Got endpoints: latency-svc-b78mr [3.241649645s] Feb 9 12:29:21.704: INFO: Created: latency-svc-pbtg7 Feb 9 12:29:21.709: INFO: Got endpoints: latency-svc-pbtg7 [2.924552847s] Feb 9 12:29:21.914: INFO: Created: latency-svc-t5pgp Feb 9 12:29:21.914: INFO: Got endpoints: latency-svc-t5pgp [2.88471974s] Feb 9 12:29:21.989: INFO: Created: latency-svc-q6s6t Feb 9 12:29:22.071: INFO: Got endpoints: latency-svc-q6s6t [2.671516634s] Feb 9 12:29:22.154: INFO: Created: latency-svc-4pxq4 Feb 9 12:29:22.159: INFO: Got endpoints: latency-svc-4pxq4 [2.222793784s] Feb 9 12:29:22.328: 
INFO: Created: latency-svc-k6lnt Feb 9 12:29:22.345: INFO: Got endpoints: latency-svc-k6lnt [2.180911814s] Feb 9 12:29:22.398: INFO: Created: latency-svc-9dzls Feb 9 12:29:22.507: INFO: Got endpoints: latency-svc-9dzls [2.133825943s] Feb 9 12:29:22.571: INFO: Created: latency-svc-wnhc8 Feb 9 12:29:22.730: INFO: Got endpoints: latency-svc-wnhc8 [2.29778432s] Feb 9 12:29:22.791: INFO: Created: latency-svc-hx5q4 Feb 9 12:29:22.963: INFO: Got endpoints: latency-svc-hx5q4 [2.341284997s] Feb 9 12:29:22.993: INFO: Created: latency-svc-7bf6b Feb 9 12:29:23.015: INFO: Got endpoints: latency-svc-7bf6b [2.231598553s] Feb 9 12:29:23.196: INFO: Created: latency-svc-7ndn6 Feb 9 12:29:23.411: INFO: Got endpoints: latency-svc-7ndn6 [2.585843183s] Feb 9 12:29:23.426: INFO: Created: latency-svc-mkjwt Feb 9 12:29:23.446: INFO: Got endpoints: latency-svc-mkjwt [2.426971232s] Feb 9 12:29:23.498: INFO: Created: latency-svc-h5ctt Feb 9 12:29:23.501: INFO: Got endpoints: latency-svc-h5ctt [2.444408372s] Feb 9 12:29:23.703: INFO: Created: latency-svc-qgx6z Feb 9 12:29:23.712: INFO: Got endpoints: latency-svc-qgx6z [2.460731662s] Feb 9 12:29:23.857: INFO: Created: latency-svc-vrq27 Feb 9 12:29:23.892: INFO: Got endpoints: latency-svc-vrq27 [2.404672764s] Feb 9 12:29:24.036: INFO: Created: latency-svc-qnk45 Feb 9 12:29:24.062: INFO: Got endpoints: latency-svc-qnk45 [2.495321211s] Feb 9 12:29:24.286: INFO: Created: latency-svc-mjvgw Feb 9 12:29:24.316: INFO: Got endpoints: latency-svc-mjvgw [2.607004746s] Feb 9 12:29:24.541: INFO: Created: latency-svc-d4dl5 Feb 9 12:29:24.573: INFO: Got endpoints: latency-svc-d4dl5 [2.658585951s] Feb 9 12:29:24.721: INFO: Created: latency-svc-826zn Feb 9 12:29:24.757: INFO: Got endpoints: latency-svc-826zn [2.686386684s] Feb 9 12:29:24.925: INFO: Created: latency-svc-frdkw Feb 9 12:29:24.942: INFO: Got endpoints: latency-svc-frdkw [2.782810336s] Feb 9 12:29:24.993: INFO: Created: latency-svc-g5zf7 Feb 9 12:29:25.105: INFO: Got endpoints: latency-svc-g5zf7 [2.760066919s] Feb 9 12:29:25.126: INFO: Created: latency-svc-qzv22 Feb 9 12:29:25.144: INFO: Got endpoints: latency-svc-qzv22 [2.636207959s] Feb 9 12:29:25.195: INFO: Created: latency-svc-4sjvm Feb 9 12:29:25.302: INFO: Got endpoints: latency-svc-4sjvm [2.570620054s] Feb 9 12:29:25.321: INFO: Created: latency-svc-rcxvb Feb 9 12:29:25.371: INFO: Created: latency-svc-djb9b Feb 9 12:29:25.398: INFO: Got endpoints: latency-svc-djb9b [2.382375061s] Feb 9 12:29:25.528: INFO: Got endpoints: latency-svc-rcxvb [2.56455517s] Feb 9 12:29:25.545: INFO: Created: latency-svc-2lrtk Feb 9 12:29:25.578: INFO: Got endpoints: latency-svc-2lrtk [2.166425228s] Feb 9 12:29:25.763: INFO: Created: latency-svc-ssrnx Feb 9 12:29:25.781: INFO: Got endpoints: latency-svc-ssrnx [2.334655995s] Feb 9 12:29:25.845: INFO: Created: latency-svc-2jl6z Feb 9 12:29:25.940: INFO: Got endpoints: latency-svc-2jl6z [2.438839597s] Feb 9 12:29:26.023: INFO: Created: latency-svc-zndsk Feb 9 12:29:26.157: INFO: Got endpoints: latency-svc-zndsk [2.445149612s] Feb 9 12:29:26.160: INFO: Created: latency-svc-n4l4q Feb 9 12:29:26.179: INFO: Got endpoints: latency-svc-n4l4q [2.286645338s] Feb 9 12:29:26.237: INFO: Created: latency-svc-95djk Feb 9 12:29:26.351: INFO: Got endpoints: latency-svc-95djk [2.288344714s] Feb 9 12:29:26.366: INFO: Created: latency-svc-pgr9c Feb 9 12:29:26.374: INFO: Got endpoints: latency-svc-pgr9c [2.057411138s] Feb 9 12:29:26.441: INFO: Created: latency-svc-kp8c9 Feb 9 12:29:26.585: INFO: Got endpoints: latency-svc-kp8c9 [2.012118402s] Feb 9 12:29:26.629: 
INFO: Created: latency-svc-gfhzb Feb 9 12:29:26.647: INFO: Got endpoints: latency-svc-gfhzb [1.889212361s] Feb 9 12:29:26.817: INFO: Created: latency-svc-bsgr6 Feb 9 12:29:26.836: INFO: Got endpoints: latency-svc-bsgr6 [1.894278486s] Feb 9 12:29:27.033: INFO: Created: latency-svc-zcvcv Feb 9 12:29:27.083: INFO: Got endpoints: latency-svc-zcvcv [1.977051657s] Feb 9 12:29:27.220: INFO: Created: latency-svc-6vnnh Feb 9 12:29:27.245: INFO: Got endpoints: latency-svc-6vnnh [2.101482482s] Feb 9 12:29:27.315: INFO: Created: latency-svc-n7tb8 Feb 9 12:29:27.431: INFO: Got endpoints: latency-svc-n7tb8 [2.129359076s] Feb 9 12:29:27.452: INFO: Created: latency-svc-2pgr5 Feb 9 12:29:27.493: INFO: Got endpoints: latency-svc-2pgr5 [2.095153466s] Feb 9 12:29:27.699: INFO: Created: latency-svc-lpnm2 Feb 9 12:29:27.715: INFO: Got endpoints: latency-svc-lpnm2 [2.186976151s] Feb 9 12:29:27.767: INFO: Created: latency-svc-m7qgt Feb 9 12:29:27.885: INFO: Got endpoints: latency-svc-m7qgt [2.306437443s] Feb 9 12:29:27.905: INFO: Created: latency-svc-wnjvf Feb 9 12:29:27.945: INFO: Got endpoints: latency-svc-wnjvf [2.16406094s] Feb 9 12:29:28.101: INFO: Created: latency-svc-v5dmk Feb 9 12:29:28.142: INFO: Got endpoints: latency-svc-v5dmk [2.201939336s] Feb 9 12:29:28.298: INFO: Created: latency-svc-qfc56 Feb 9 12:29:28.318: INFO: Got endpoints: latency-svc-qfc56 [2.160290144s] Feb 9 12:29:28.358: INFO: Created: latency-svc-lsh5t Feb 9 12:29:28.385: INFO: Got endpoints: latency-svc-lsh5t [2.20612787s] Feb 9 12:29:28.512: INFO: Created: latency-svc-vqsfp Feb 9 12:29:28.551: INFO: Got endpoints: latency-svc-vqsfp [2.200277764s] Feb 9 12:29:28.687: INFO: Created: latency-svc-n4cfv Feb 9 12:29:28.693: INFO: Got endpoints: latency-svc-n4cfv [2.319301828s] Feb 9 12:29:28.737: INFO: Created: latency-svc-496gh Feb 9 12:29:28.758: INFO: Got endpoints: latency-svc-496gh [2.172396899s] Feb 9 12:29:28.959: INFO: Created: latency-svc-jllzd Feb 9 12:29:29.101: INFO: Got endpoints: latency-svc-jllzd [2.454047481s] Feb 9 12:29:29.122: INFO: Created: latency-svc-cbl7f Feb 9 12:29:29.167: INFO: Got endpoints: latency-svc-cbl7f [2.330354711s] Feb 9 12:29:29.268: INFO: Created: latency-svc-chhg8 Feb 9 12:29:29.300: INFO: Got endpoints: latency-svc-chhg8 [2.216775414s] Feb 9 12:29:29.377: INFO: Created: latency-svc-t4htq Feb 9 12:29:29.489: INFO: Got endpoints: latency-svc-t4htq [2.243607114s] Feb 9 12:29:29.507: INFO: Created: latency-svc-vtw9d Feb 9 12:29:29.529: INFO: Got endpoints: latency-svc-vtw9d [2.097776871s] Feb 9 12:29:29.748: INFO: Created: latency-svc-jrtgh Feb 9 12:29:29.784: INFO: Got endpoints: latency-svc-jrtgh [2.290328804s] Feb 9 12:29:29.830: INFO: Created: latency-svc-9t4zd Feb 9 12:29:29.947: INFO: Got endpoints: latency-svc-9t4zd [2.231040072s] Feb 9 12:29:29.974: INFO: Created: latency-svc-7x7dk Feb 9 12:29:30.152: INFO: Got endpoints: latency-svc-7x7dk [2.266666067s] Feb 9 12:29:30.167: INFO: Created: latency-svc-q46dl Feb 9 12:29:30.224: INFO: Got endpoints: latency-svc-q46dl [2.278098126s] Feb 9 12:29:30.383: INFO: Created: latency-svc-9glcn Feb 9 12:29:30.388: INFO: Got endpoints: latency-svc-9glcn [2.245751636s] Feb 9 12:29:30.634: INFO: Created: latency-svc-864lx Feb 9 12:29:30.682: INFO: Got endpoints: latency-svc-864lx [2.362833947s] Feb 9 12:29:30.911: INFO: Created: latency-svc-cr9lj Feb 9 12:29:30.923: INFO: Got endpoints: latency-svc-cr9lj [2.537433384s] Feb 9 12:29:31.155: INFO: Created: latency-svc-kxqzd Feb 9 12:29:31.183: INFO: Got endpoints: latency-svc-kxqzd [2.631951698s] Feb 9 12:29:32.826: 
INFO: Created: latency-svc-8vjsz Feb 9 12:29:33.081: INFO: Got endpoints: latency-svc-8vjsz [4.387185182s] Feb 9 12:29:33.099: INFO: Created: latency-svc-dtb7r Feb 9 12:29:33.134: INFO: Got endpoints: latency-svc-dtb7r [4.37581447s] Feb 9 12:29:33.309: INFO: Created: latency-svc-dmql9 Feb 9 12:29:33.330: INFO: Got endpoints: latency-svc-dmql9 [4.22893566s] Feb 9 12:29:33.499: INFO: Created: latency-svc-xrmbq Feb 9 12:29:33.522: INFO: Got endpoints: latency-svc-xrmbq [4.354806044s] Feb 9 12:29:33.743: INFO: Created: latency-svc-g6dgp Feb 9 12:29:33.751: INFO: Got endpoints: latency-svc-g6dgp [4.450326791s] Feb 9 12:29:33.814: INFO: Created: latency-svc-5l6np Feb 9 12:29:33.956: INFO: Got endpoints: latency-svc-5l6np [4.466982316s] Feb 9 12:29:34.049: INFO: Created: latency-svc-hbzzb Feb 9 12:29:34.203: INFO: Got endpoints: latency-svc-hbzzb [4.67303959s] Feb 9 12:29:34.272: INFO: Created: latency-svc-mzdw9 Feb 9 12:29:34.290: INFO: Got endpoints: latency-svc-mzdw9 [4.505560386s] Feb 9 12:29:34.414: INFO: Created: latency-svc-4vgmg Feb 9 12:29:34.441: INFO: Got endpoints: latency-svc-4vgmg [4.493405612s] Feb 9 12:29:34.619: INFO: Created: latency-svc-hgsk8 Feb 9 12:29:34.647: INFO: Got endpoints: latency-svc-hgsk8 [4.494804704s] Feb 9 12:29:34.805: INFO: Created: latency-svc-nczzl Feb 9 12:29:34.829: INFO: Got endpoints: latency-svc-nczzl [4.604451172s] Feb 9 12:29:35.038: INFO: Created: latency-svc-z65hm Feb 9 12:29:35.052: INFO: Got endpoints: latency-svc-z65hm [4.663429702s] Feb 9 12:29:35.094: INFO: Created: latency-svc-2gtf2 Feb 9 12:29:35.135: INFO: Got endpoints: latency-svc-2gtf2 [4.452946058s] Feb 9 12:29:35.282: INFO: Created: latency-svc-8dlfd Feb 9 12:29:35.304: INFO: Got endpoints: latency-svc-8dlfd [4.380692182s] Feb 9 12:29:35.440: INFO: Created: latency-svc-j8fmw Feb 9 12:29:35.462: INFO: Got endpoints: latency-svc-j8fmw [4.278343635s] Feb 9 12:29:35.538: INFO: Created: latency-svc-hp8dq Feb 9 12:29:35.686: INFO: Got endpoints: latency-svc-hp8dq [2.604462812s] Feb 9 12:29:35.705: INFO: Created: latency-svc-xmd6c Feb 9 12:29:35.709: INFO: Got endpoints: latency-svc-xmd6c [2.575467825s] Feb 9 12:29:35.773: INFO: Created: latency-svc-rsw2f Feb 9 12:29:35.946: INFO: Got endpoints: latency-svc-rsw2f [2.615626025s] Feb 9 12:29:35.959: INFO: Created: latency-svc-b9hz5 Feb 9 12:29:36.005: INFO: Got endpoints: latency-svc-b9hz5 [2.483166353s] Feb 9 12:29:36.150: INFO: Created: latency-svc-lr27r Feb 9 12:29:36.172: INFO: Got endpoints: latency-svc-lr27r [2.421177015s] Feb 9 12:29:36.335: INFO: Created: latency-svc-bhn2v Feb 9 12:29:36.359: INFO: Got endpoints: latency-svc-bhn2v [2.402412442s] Feb 9 12:29:36.505: INFO: Created: latency-svc-rz692 Feb 9 12:29:36.537: INFO: Got endpoints: latency-svc-rz692 [2.333954694s] Feb 9 12:29:36.769: INFO: Created: latency-svc-prqgn Feb 9 12:29:36.828: INFO: Got endpoints: latency-svc-prqgn [2.538324582s] Feb 9 12:29:36.941: INFO: Created: latency-svc-wz792 Feb 9 12:29:36.957: INFO: Got endpoints: latency-svc-wz792 [2.516335475s] Feb 9 12:29:37.114: INFO: Created: latency-svc-pnd6g Feb 9 12:29:37.145: INFO: Got endpoints: latency-svc-pnd6g [2.497721772s] Feb 9 12:29:37.203: INFO: Created: latency-svc-qckb5 Feb 9 12:29:37.338: INFO: Got endpoints: latency-svc-qckb5 [2.508923144s] Feb 9 12:29:37.390: INFO: Created: latency-svc-4lmsn Feb 9 12:29:37.656: INFO: Got endpoints: latency-svc-4lmsn [2.603932982s] Feb 9 12:29:37.674: INFO: Created: latency-svc-99rcp Feb 9 12:29:37.736: INFO: Got endpoints: latency-svc-99rcp [2.600698237s] Feb 9 12:29:37.960: 
INFO: Created: latency-svc-6xhpn Feb 9 12:29:38.163: INFO: Created: latency-svc-rv4pr Feb 9 12:29:38.163: INFO: Got endpoints: latency-svc-6xhpn [2.859018997s] Feb 9 12:29:38.178: INFO: Got endpoints: latency-svc-rv4pr [2.715845031s] Feb 9 12:29:38.227: INFO: Created: latency-svc-ts2kb Feb 9 12:29:38.350: INFO: Got endpoints: latency-svc-ts2kb [2.664190371s] Feb 9 12:29:38.367: INFO: Created: latency-svc-4gkhs Feb 9 12:29:38.391: INFO: Got endpoints: latency-svc-4gkhs [2.681086547s] Feb 9 12:29:38.560: INFO: Created: latency-svc-r9mgt Feb 9 12:29:38.632: INFO: Got endpoints: latency-svc-r9mgt [2.685153075s] Feb 9 12:29:38.762: INFO: Created: latency-svc-s9gp8 Feb 9 12:29:38.784: INFO: Got endpoints: latency-svc-s9gp8 [2.778252989s] Feb 9 12:29:38.920: INFO: Created: latency-svc-vcpxs Feb 9 12:29:38.939: INFO: Got endpoints: latency-svc-vcpxs [2.766826403s] Feb 9 12:29:39.710: INFO: Created: latency-svc-5htsl Feb 9 12:29:39.736: INFO: Got endpoints: latency-svc-5htsl [3.377037072s] Feb 9 12:29:40.717: INFO: Created: latency-svc-c447g Feb 9 12:29:40.735: INFO: Got endpoints: latency-svc-c447g [4.198029563s] Feb 9 12:29:41.339: INFO: Created: latency-svc-6p282 Feb 9 12:29:41.395: INFO: Got endpoints: latency-svc-6p282 [4.566815572s] Feb 9 12:29:41.678: INFO: Created: latency-svc-tg8bp Feb 9 12:29:41.962: INFO: Got endpoints: latency-svc-tg8bp [5.005114617s] Feb 9 12:29:42.560: INFO: Created: latency-svc-465r5 Feb 9 12:29:42.665: INFO: Got endpoints: latency-svc-465r5 [5.519685534s] Feb 9 12:29:42.694: INFO: Created: latency-svc-2nwnb Feb 9 12:29:42.701: INFO: Got endpoints: latency-svc-2nwnb [5.363327731s] Feb 9 12:29:42.753: INFO: Created: latency-svc-sdc5z Feb 9 12:29:42.892: INFO: Got endpoints: latency-svc-sdc5z [5.234878091s] Feb 9 12:29:42.909: INFO: Created: latency-svc-hqqrk Feb 9 12:29:42.923: INFO: Got endpoints: latency-svc-hqqrk [5.185513312s] Feb 9 12:29:42.992: INFO: Created: latency-svc-qq8kf Feb 9 12:29:43.095: INFO: Got endpoints: latency-svc-qq8kf [4.931265068s] Feb 9 12:29:43.114: INFO: Created: latency-svc-8fnc8 Feb 9 12:29:43.114: INFO: Got endpoints: latency-svc-8fnc8 [4.935371148s] Feb 9 12:29:43.114: INFO: Latencies: [231.148729ms 263.558136ms 436.832962ms 656.035089ms 707.140896ms 854.222757ms 931.532442ms 1.111643164s 1.292946001s 1.349333059s 1.497044884s 1.67768505s 1.702554505s 1.889212361s 1.894278486s 1.927747975s 1.977051657s 2.000376511s 2.012118402s 2.057411138s 2.095153466s 2.097776871s 2.098658685s 2.101482482s 2.118181904s 2.129359076s 2.133825943s 2.160290144s 2.163090175s 2.16406094s 2.166425228s 2.172396899s 2.180911814s 2.186976151s 2.191209901s 2.199710772s 2.200277764s 2.201357527s 2.201939336s 2.20612787s 2.216775414s 2.221359254s 2.222793784s 2.224759557s 2.231040072s 2.231598553s 2.239021057s 2.242342811s 2.243607114s 2.245751636s 2.256201049s 2.265374005s 2.266666067s 2.272154845s 2.278098126s 2.279145191s 2.283892311s 2.286645338s 2.288344714s 2.290328804s 2.29778432s 2.297808023s 2.306437443s 2.319301828s 2.330354711s 2.330762417s 2.333954694s 2.334655995s 2.341284997s 2.342755871s 2.347791376s 2.353756163s 2.362833947s 2.365094389s 2.368356851s 2.382375061s 2.387777514s 2.402412442s 2.404672764s 2.406621978s 2.409745961s 2.41114461s 2.412466445s 2.416065253s 2.421177015s 2.42382422s 2.426971232s 2.438839597s 2.444239339s 2.444408372s 2.445149612s 2.454047481s 2.460731662s 2.483166353s 2.495321211s 2.497721772s 2.508923144s 2.514272936s 2.516335475s 2.51638894s 2.532787987s 2.535793157s 2.537433384s 2.538324582s 2.546345215s 2.56455517s 
2.570620054s 2.575467825s 2.576872894s 2.585843183s 2.5892769s 2.600698237s 2.603932982s 2.604462812s 2.607004746s 2.615626025s 2.631951698s 2.636042102s 2.636207959s 2.650788328s 2.656129245s 2.658585951s 2.664190371s 2.671516634s 2.681086547s 2.685153075s 2.686386684s 2.687081705s 2.710418903s 2.712799602s 2.715845031s 2.716151759s 2.717270854s 2.722069701s 2.729747969s 2.757385431s 2.760066919s 2.766826403s 2.778252989s 2.780912011s 2.782041725s 2.782810336s 2.790356121s 2.801287312s 2.802688547s 2.823250176s 2.841039263s 2.843116135s 2.845035441s 2.852556868s 2.859018997s 2.88471974s 2.88642425s 2.911861439s 2.924552847s 2.934466266s 2.93987992s 2.945755816s 2.959679914s 2.971336662s 3.006077125s 3.031552873s 3.038503053s 3.241649645s 3.245549343s 3.252435596s 3.258006026s 3.258808465s 3.267254445s 3.321136255s 3.377037072s 3.39001976s 3.430268217s 3.500704846s 3.506295442s 3.523765835s 4.198029563s 4.22893566s 4.278343635s 4.354806044s 4.37581447s 4.380692182s 4.387185182s 4.450326791s 4.452946058s 4.466982316s 4.493405612s 4.494804704s 4.505560386s 4.566815572s 4.604451172s 4.663429702s 4.67303959s 4.931265068s 4.935371148s 5.005114617s 5.185513312s 5.234878091s 5.363327731s 5.519685534s] Feb 9 12:29:43.114: INFO: 50 %ile: 2.532787987s Feb 9 12:29:43.115: INFO: 90 %ile: 4.37581447s Feb 9 12:29:43.115: INFO: 99 %ile: 5.363327731s Feb 9 12:29:43.115: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:29:43.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-2gkvt" for this suite. Feb 9 12:30:33.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:30:33.330: INFO: namespace: e2e-tests-svc-latency-2gkvt, resource: bindings, ignored listing per whitelist Feb 9 12:30:33.907: INFO: namespace e2e-tests-svc-latency-2gkvt deletion completed in 50.780104366s • [SLOW TEST:100.487 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:30:33.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 12:30:34.540: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:30:44.683: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-ng7cm" for this suite. Feb 9 12:31:38.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:31:38.825: INFO: namespace: e2e-tests-pods-ng7cm, resource: bindings, ignored listing per whitelist Feb 9 12:31:38.914: INFO: namespace e2e-tests-pods-ng7cm deletion completed in 54.223231227s • [SLOW TEST:65.006 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:31:38.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 9 12:31:51.418: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-1e99d51a-4b38-11ea-aa78-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-lvjcx", SelfLink:"/api/v1/namespaces/e2e-tests-pods-lvjcx/pods/pod-submit-remove-1e99d51a-4b38-11ea-aa78-0242ac110005", UID:"1ea2b2bd-4b38-11ea-a994-fa163e34d433", ResourceVersion:"21088956", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716848299, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"187147834", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nj4qp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fb8f80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nj4qp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e74808), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ae4300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e74840)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e74b60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e74b68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e74b6c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848299, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848310, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848310, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63716848299, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0018c2a80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0018c2aa0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://83e65628572c8fb5b647f790bef5fa9fdb3e9ada54b8fe0bfc65c3ca699a8c90"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:32:02.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-lvjcx" for this suite. Feb 9 12:32:08.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:32:08.768: INFO: namespace: e2e-tests-pods-lvjcx, resource: bindings, ignored listing per whitelist Feb 9 12:32:08.876: INFO: namespace e2e-tests-pods-lvjcx deletion completed in 6.237886715s • [SLOW TEST:29.961 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:32:08.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Feb 9 12:32:19.189: INFO: Pod pod-hostip-306e1b12-4b38-11ea-aa78-0242ac110005 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:32:19.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-29j5m" for this suite. 
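The pod-hostip check above confirms that the kubelet publishes a host IP (10.96.1.240 on this node) in the pod status. For reference, a minimal sketch of the same lookup with the v1.13-era client-go this suite runs against — the kubeconfig path matches this run, but the namespace and pod name below are placeholders, not the generated e2e ones:

```go
// Sketch only: fetch a pod and print the host/pod IPs from its status.
// The Get signature (no context argument) matches v1.13-era client-go.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Placeholder namespace and pod name, standing in for the generated e2e ones.
	pod, err := clientset.CoreV1().Pods("default").Get("pod-hostip-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s has hostIP %s and podIP %s\n", pod.Name, pod.Status.HostIP, pod.Status.PodIP)
}
```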
Feb 9 12:32:41.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:32:41.417: INFO: namespace: e2e-tests-pods-29j5m, resource: bindings, ignored listing per whitelist Feb 9 12:32:41.478: INFO: namespace e2e-tests-pods-29j5m deletion completed in 22.279638666s • [SLOW TEST:32.602 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:32:41.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:32:41.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bgg8j" for this suite. 
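The Set QOS Class test above only asserts that a QOS class appears on the pod; which class gets assigned follows from the resource requests and limits in the spec. A rough sketch of that mapping using the corev1 Go types — the image name and quantities are placeholders:

```go
// Sketch of how requests/limits determine pod.Status.QOSClass.
// Image names and resource quantities are illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No requests or limits on any container -> QOSClass "BestEffort".
	bestEffort := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-besteffort"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}

	// Requests equal to limits for every container and resource -> "Guaranteed".
	// Requests set but lower than limits (or set on only some resources) -> "Burstable".
	guaranteed := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-guaranteed"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "docker.io/library/nginx:1.14-alpine",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("100m"),
						corev1.ResourceMemory: resource.MustParse("128Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("100m"),
						corev1.ResourceMemory: resource.MustParse("128Mi"),
					},
				},
			}},
		},
	}

	fmt.Println(bestEffort.Name, guaranteed.Name)
}
```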
Feb 9 12:33:05.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:33:05.947: INFO: namespace: e2e-tests-pods-bgg8j, resource: bindings, ignored listing per whitelist Feb 9 12:33:06.000: INFO: namespace e2e-tests-pods-bgg8j deletion completed in 24.288276526s • [SLOW TEST:24.521 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:33:06.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 9 12:33:06.293: INFO: Waiting up to 5m0s for pod "pod-5277de08-4b38-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-rxrkr" to be "success or failure" Feb 9 12:33:06.313: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.094184ms Feb 9 12:33:08.478: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1851938s Feb 9 12:33:10.505: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21246279s Feb 9 12:33:12.899: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.605625253s Feb 9 12:33:14.913: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.620488082s Feb 9 12:33:16.933: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639665445s STEP: Saw pod success Feb 9 12:33:16.933: INFO: Pod "pod-5277de08-4b38-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:33:16.941: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5277de08-4b38-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 12:33:17.930: INFO: Waiting for pod pod-5277de08-4b38-11ea-aa78-0242ac110005 to disappear Feb 9 12:33:18.237: INFO: Pod pod-5277de08-4b38-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:33:18.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rxrkr" for this suite. 
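The emptydir (non-root,0666,tmpfs) case above exercises a memory-backed emptyDir mounted into a container that runs as a non-root user and verifies 0666 file permissions on it. An approximate shape for such a pod — not the exact e2e pod; the image, UID, paths, and command are placeholders:

```go
// Approximate sketch of a non-root pod writing a 0666 file on a tmpfs-backed emptyDir.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootTmpfsPod() *corev1.Pod {
	uid := int64(1001) // any non-root UID; illustrative only
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "writer",
				Image:           "busybox", // placeholder image
				Command:         []string{"sh", "-c", "touch /mnt/scratch/f && chmod 0666 /mnt/scratch/f && ls -l /mnt/scratch"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
}

func main() { _ = nonRootTmpfsPod() }
```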
Feb 9 12:33:24.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:33:24.423: INFO: namespace: e2e-tests-emptydir-rxrkr, resource: bindings, ignored listing per whitelist Feb 9 12:33:24.530: INFO: namespace e2e-tests-emptydir-rxrkr deletion completed in 6.26537927s • [SLOW TEST:18.529 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:33:24.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 9 12:33:24.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:26.804: INFO: stderr: "" Feb 9 12:33:26.804: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 9 12:33:26.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:27.098: INFO: stderr: "" Feb 9 12:33:27.098: INFO: stdout: "update-demo-nautilus-9ffnp update-demo-nautilus-nlz9v " Feb 9 12:33:27.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ffnp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:27.248: INFO: stderr: "" Feb 9 12:33:27.248: INFO: stdout: "" Feb 9 12:33:27.248: INFO: update-demo-nautilus-9ffnp is created but not running Feb 9 12:33:32.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:32.471: INFO: stderr: "" Feb 9 12:33:32.471: INFO: stdout: "update-demo-nautilus-9ffnp update-demo-nautilus-nlz9v " Feb 9 12:33:32.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ffnp -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:32.632: INFO: stderr: "" Feb 9 12:33:32.632: INFO: stdout: "" Feb 9 12:33:32.632: INFO: update-demo-nautilus-9ffnp is created but not running Feb 9 12:33:37.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:37.801: INFO: stderr: "" Feb 9 12:33:37.801: INFO: stdout: "update-demo-nautilus-9ffnp update-demo-nautilus-nlz9v " Feb 9 12:33:37.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ffnp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:38.090: INFO: stderr: "" Feb 9 12:33:38.090: INFO: stdout: "" Feb 9 12:33:38.090: INFO: update-demo-nautilus-9ffnp is created but not running Feb 9 12:33:43.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:43.220: INFO: stderr: "" Feb 9 12:33:43.221: INFO: stdout: "update-demo-nautilus-9ffnp update-demo-nautilus-nlz9v " Feb 9 12:33:43.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ffnp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:43.361: INFO: stderr: "" Feb 9 12:33:43.361: INFO: stdout: "true" Feb 9 12:33:43.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9ffnp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:43.500: INFO: stderr: "" Feb 9 12:33:43.500: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 12:33:43.500: INFO: validating pod update-demo-nautilus-9ffnp Feb 9 12:33:43.524: INFO: got data: { "image": "nautilus.jpg" } Feb 9 12:33:43.524: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 12:33:43.524: INFO: update-demo-nautilus-9ffnp is verified up and running Feb 9 12:33:43.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nlz9v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:43.619: INFO: stderr: "" Feb 9 12:33:43.619: INFO: stdout: "true" Feb 9 12:33:43.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nlz9v -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:33:43.767: INFO: stderr: "" Feb 9 12:33:43.767: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 12:33:43.767: INFO: validating pod update-demo-nautilus-nlz9v Feb 9 12:33:43.793: INFO: got data: { "image": "nautilus.jpg" } Feb 9 12:33:43.793: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 12:33:43.793: INFO: update-demo-nautilus-nlz9v is verified up and running STEP: rolling-update to new replication controller Feb 9 12:33:43.800: INFO: scanned /root for discovery docs: Feb 9 12:33:43.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:34:18.913: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 9 12:34:18.914: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 9 12:34:18.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:34:19.101: INFO: stderr: "" Feb 9 12:34:19.101: INFO: stdout: "update-demo-kitten-8ptgd update-demo-kitten-t69kp " Feb 9 12:34:19.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8ptgd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:34:19.222: INFO: stderr: "" Feb 9 12:34:19.222: INFO: stdout: "true" Feb 9 12:34:19.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8ptgd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:34:19.315: INFO: stderr: "" Feb 9 12:34:19.315: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 9 12:34:19.315: INFO: validating pod update-demo-kitten-8ptgd Feb 9 12:34:19.361: INFO: got data: { "image": "kitten.jpg" } Feb 9 12:34:19.361: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 9 12:34:19.361: INFO: update-demo-kitten-8ptgd is verified up and running Feb 9 12:34:19.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t69kp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:34:19.473: INFO: stderr: "" Feb 9 12:34:19.473: INFO: stdout: "true" Feb 9 12:34:19.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t69kp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6hpd' Feb 9 12:34:19.583: INFO: stderr: "" Feb 9 12:34:19.584: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 9 12:34:19.584: INFO: validating pod update-demo-kitten-t69kp Feb 9 12:34:19.595: INFO: got data: { "image": "kitten.jpg" } Feb 9 12:34:19.595: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 9 12:34:19.595: INFO: update-demo-kitten-t69kp is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:34:19.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s6hpd" for this suite. Feb 9 12:34:43.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:34:44.000: INFO: namespace: e2e-tests-kubectl-s6hpd, resource: bindings, ignored listing per whitelist Feb 9 12:34:44.100: INFO: namespace e2e-tests-kubectl-s6hpd deletion completed in 24.499479319s • [SLOW TEST:79.570 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:34:44.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-8cf46f11-4b38-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:34:44.440: INFO: Waiting up to 5m0s for pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-v4qsp" to be "success or failure" Feb 9 12:34:44.467: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.471624ms Feb 9 12:34:46.493: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053024604s Feb 9 12:34:48.512: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.071936913s Feb 9 12:34:50.667: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227488476s Feb 9 12:34:52.686: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2464137s Feb 9 12:34:54.701: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.261049237s STEP: Saw pod success Feb 9 12:34:54.701: INFO: Pod "pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:34:54.708: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 9 12:34:54.950: INFO: Waiting for pod pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005 to disappear Feb 9 12:34:54.960: INFO: Pod pod-secrets-8d00b156-4b38-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:34:54.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-v4qsp" for this suite. Feb 9 12:35:00.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:35:01.073: INFO: namespace: e2e-tests-secrets-v4qsp, resource: bindings, ignored listing per whitelist Feb 9 12:35:01.129: INFO: namespace e2e-tests-secrets-v4qsp deletion completed in 6.16127651s • [SLOW TEST:17.029 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:35:01.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 9 12:35:01.304: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:35:24.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-c4gjh" for this suite. 
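
For reference, the pod this init-container test builds programmatically can be reproduced by hand roughly as below; the pod name "init-demo" is illustrative, and the images match the ones dumped later in this suite. The test then watches pod status until both init containers report a terminated state with exit code 0 and the app container starts.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
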
Feb 9 12:35:48.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:35:48.240: INFO: namespace: e2e-tests-init-container-c4gjh, resource: bindings, ignored listing per whitelist Feb 9 12:35:48.352: INFO: namespace e2e-tests-init-container-c4gjh deletion completed in 24.188249946s • [SLOW TEST:47.222 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:35:48.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 9 12:35:48.577: INFO: Waiting up to 5m0s for pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-vfw5c" to be "success or failure" Feb 9 12:35:48.583: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.771469ms Feb 9 12:35:50.608: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029935336s Feb 9 12:35:52.639: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061186707s Feb 9 12:35:54.664: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086022208s Feb 9 12:35:56.688: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110720478s Feb 9 12:35:58.715: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137839741s Feb 9 12:36:00.760: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.181948184s STEP: Saw pod success Feb 9 12:36:00.760: INFO: Pod "pod-b33bfd88-4b38-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:36:00.767: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b33bfd88-4b38-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 12:36:01.669: INFO: Waiting for pod pod-b33bfd88-4b38-11ea-aa78-0242ac110005 to disappear Feb 9 12:36:02.147: INFO: Pod pod-b33bfd88-4b38-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:36:02.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vfw5c" for this suite. 
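
A hand-written equivalent of the emptyDir-on-tmpfs check could look like the following sketch; the suite's mounttest image is replaced here with a plain busybox command, so treat the pod name and commands as illustrative rather than the suite's exact pod.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /ed/file && chmod 0644 /ed/file && ls -ln /ed/file && mount | grep ' /ed '"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
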
Feb 9 12:36:08.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:36:08.439: INFO: namespace: e2e-tests-emptydir-vfw5c, resource: bindings, ignored listing per whitelist Feb 9 12:36:08.518: INFO: namespace e2e-tests-emptydir-vfw5c deletion completed in 6.349848274s • [SLOW TEST:20.166 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:36:08.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:36:20.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tvpvv" for this suite. 
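
What this Kubelet test asserts can be checked by hand as sketched below: a container whose command always fails should surface a populated terminated state (a reason such as "Error" and a non-zero exit code) in its status. The pod name and restartPolicy here are illustrative, not the suite's exact spec.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
EOF
kubectl get pod bin-false-pod \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
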
Feb 9 12:36:29.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:36:29.122: INFO: namespace: e2e-tests-kubelet-test-tvpvv, resource: bindings, ignored listing per whitelist Feb 9 12:36:29.208: INFO: namespace e2e-tests-kubelet-test-tvpvv deletion completed in 8.205009109s • [SLOW TEST:20.689 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:36:29.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-cb9066fe-4b38-11ea-aa78-0242ac110005 STEP: Creating configMap with name cm-test-opt-upd-cb9068e8-4b38-11ea-aa78-0242ac110005 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cb9066fe-4b38-11ea-aa78-0242ac110005 STEP: Updating configmap cm-test-opt-upd-cb9068e8-4b38-11ea-aa78-0242ac110005 STEP: Creating configMap with name cm-test-opt-create-cb906938-4b38-11ea-aa78-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:38:18.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sw286" for this suite. 
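
The volume shape this projected-configMap test exercises is sketched below; the pod and ConfigMap names are placeholders. Marking each source optional: true is what lets one ConfigMap be deleted and another created afterwards while the mounted contents update in place, which is the "waiting to observe update in volume" step above.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-opt-del
          optional: true
      - configMap:
          name: cm-opt-upd
          optional: true
EOF
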
Feb 9 12:38:42.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:38:42.764: INFO: namespace: e2e-tests-projected-sw286, resource: bindings, ignored listing per whitelist Feb 9 12:38:42.782: INFO: namespace e2e-tests-projected-sw286 deletion completed in 24.142226108s • [SLOW TEST:133.573 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:38:42.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 9 12:38:43.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-qxjjd" to be "success or failure" Feb 9 12:38:43.044: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.330663ms Feb 9 12:38:45.860: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.830948597s Feb 9 12:38:47.907: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.87746775s Feb 9 12:38:49.922: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.892442788s Feb 9 12:38:52.413: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.383352548s Feb 9 12:38:54.479: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.449551516s Feb 9 12:38:56.789: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.75951963s STEP: Saw pod success Feb 9 12:38:56.789: INFO: Pod "downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:38:57.173: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005 container client-container: STEP: delete the pod Feb 9 12:38:57.801: INFO: Waiting for pod downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005 to disappear Feb 9 12:38:57.869: INFO: Pod downwardapi-volume-1b3955d8-4b39-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:38:57.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qxjjd" for this suite. Feb 9 12:39:03.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:39:04.125: INFO: namespace: e2e-tests-projected-qxjjd, resource: bindings, ignored listing per whitelist Feb 9 12:39:04.189: INFO: namespace e2e-tests-projected-qxjjd deletion completed in 6.306405963s • [SLOW TEST:21.406 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:39:04.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 9 12:39:04.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:04.755: INFO: stderr: "" Feb 9 12:39:04.755: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 9 12:39:04.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:05.060: INFO: stderr: "" Feb 9 12:39:05.060: INFO: stdout: "update-demo-nautilus-fxn27 update-demo-nautilus-gdlg9 " Feb 9 12:39:05.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fxn27 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:05.201: INFO: stderr: "" Feb 9 12:39:05.201: INFO: stdout: "" Feb 9 12:39:05.201: INFO: update-demo-nautilus-fxn27 is created but not running Feb 9 12:39:10.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:10.338: INFO: stderr: "" Feb 9 12:39:10.339: INFO: stdout: "update-demo-nautilus-fxn27 update-demo-nautilus-gdlg9 " Feb 9 12:39:10.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fxn27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:10.458: INFO: stderr: "" Feb 9 12:39:10.458: INFO: stdout: "" Feb 9 12:39:10.459: INFO: update-demo-nautilus-fxn27 is created but not running Feb 9 12:39:15.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:15.651: INFO: stderr: "" Feb 9 12:39:15.651: INFO: stdout: "update-demo-nautilus-fxn27 update-demo-nautilus-gdlg9 " Feb 9 12:39:15.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fxn27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:15.794: INFO: stderr: "" Feb 9 12:39:15.794: INFO: stdout: "" Feb 9 12:39:15.794: INFO: update-demo-nautilus-fxn27 is created but not running Feb 9 12:39:20.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:20.957: INFO: stderr: "" Feb 9 12:39:20.957: INFO: stdout: "update-demo-nautilus-fxn27 update-demo-nautilus-gdlg9 " Feb 9 12:39:20.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fxn27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:21.073: INFO: stderr: "" Feb 9 12:39:21.073: INFO: stdout: "true" Feb 9 12:39:21.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fxn27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:21.192: INFO: stderr: "" Feb 9 12:39:21.192: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 12:39:21.193: INFO: validating pod update-demo-nautilus-fxn27 Feb 9 12:39:21.221: INFO: got data: { "image": "nautilus.jpg" } Feb 9 12:39:21.221: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 9 12:39:21.222: INFO: update-demo-nautilus-fxn27 is verified up and running Feb 9 12:39:21.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdlg9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:21.337: INFO: stderr: "" Feb 9 12:39:21.337: INFO: stdout: "true" Feb 9 12:39:21.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gdlg9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:21.514: INFO: stderr: "" Feb 9 12:39:21.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 9 12:39:21.515: INFO: validating pod update-demo-nautilus-gdlg9 Feb 9 12:39:21.600: INFO: got data: { "image": "nautilus.jpg" } Feb 9 12:39:21.600: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 9 12:39:21.600: INFO: update-demo-nautilus-gdlg9 is verified up and running STEP: using delete to clean up resources Feb 9 12:39:21.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:21.740: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 9 12:39:21.740: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 9 12:39:21.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-t2gfc' Feb 9 12:39:21.936: INFO: stderr: "No resources found.\n" Feb 9 12:39:21.936: INFO: stdout: "" Feb 9 12:39:21.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-t2gfc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 9 12:39:22.093: INFO: stderr: "" Feb 9 12:39:22.093: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:39:22.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t2gfc" for this suite. 
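
The go-template queries this Update Demo test loops on can be run by hand as below: the second command prints "true" only once the named container reports a running state, and the third prints the container image. kubectl's template printer provides the exists helper used here; drop the --kubeconfig/--namespace flags or add your own as needed.

kubectl get pods -l name=update-demo -o template \
  --template='{{range .items}}{{.metadata.name}} {{end}}'
POD=update-demo-nautilus-fxn27   # one of the pod names printed above
kubectl get pod "$POD" -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
kubectl get pod "$POD" -o template \
  --template='{{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
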
Feb 9 12:39:46.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:39:46.317: INFO: namespace: e2e-tests-kubectl-t2gfc, resource: bindings, ignored listing per whitelist Feb 9 12:39:46.333: INFO: namespace e2e-tests-kubectl-t2gfc deletion completed in 24.21447824s • [SLOW TEST:42.144 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:39:46.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-56n7 STEP: Creating a pod to test atomic-volume-subpath Feb 9 12:39:46.609: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-56n7" in namespace "e2e-tests-subpath-c5fcb" to be "success or failure" Feb 9 12:39:46.619: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.813398ms Feb 9 12:39:49.175: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.56573013s Feb 9 12:39:51.192: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.582871975s Feb 9 12:39:53.827: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.217926148s Feb 9 12:39:55.850: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.241242744s Feb 9 12:39:58.084: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.474853458s Feb 9 12:40:00.109: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.499869436s Feb 9 12:40:02.144: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.53521579s Feb 9 12:40:04.162: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.552831951s Feb 9 12:40:06.189: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 19.580577743s Feb 9 12:40:08.244: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 21.63475748s Feb 9 12:40:10.263: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.654727003s Feb 9 12:40:12.286: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 25.677382191s Feb 9 12:40:14.308: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 27.698993271s Feb 9 12:40:16.330: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 29.721616767s Feb 9 12:40:18.366: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 31.756893736s Feb 9 12:40:20.388: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 33.778906317s Feb 9 12:40:22.418: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Running", Reason="", readiness=false. Elapsed: 35.809026582s Feb 9 12:40:24.519: INFO: Pod "pod-subpath-test-downwardapi-56n7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.910362897s STEP: Saw pod success Feb 9 12:40:24.519: INFO: Pod "pod-subpath-test-downwardapi-56n7" satisfied condition "success or failure" Feb 9 12:40:24.541: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-56n7 container test-container-subpath-downwardapi-56n7: STEP: delete the pod Feb 9 12:40:25.396: INFO: Waiting for pod pod-subpath-test-downwardapi-56n7 to disappear Feb 9 12:40:25.412: INFO: Pod pod-subpath-test-downwardapi-56n7 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-56n7 Feb 9 12:40:25.412: INFO: Deleting pod "pod-subpath-test-downwardapi-56n7" in namespace "e2e-tests-subpath-c5fcb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:40:25.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c5fcb" for this suite. 
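
The suite builds its subpath pod in Go, but the downwardAPI-plus-subPath combination it exercises looks roughly like the sketch below (names and mount paths are illustrative): the volume exposes pod fields as files, and subPath mounts a single entry from that volume into the container.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /mnt/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /mnt/podname
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
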
Feb 9 12:40:33.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:40:33.550: INFO: namespace: e2e-tests-subpath-c5fcb, resource: bindings, ignored listing per whitelist Feb 9 12:40:33.838: INFO: namespace e2e-tests-subpath-c5fcb deletion completed in 8.413742193s • [SLOW TEST:47.505 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:40:33.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5d636361-4b39-11ea-aa78-0242ac110005 STEP: Creating a pod to test consume secrets Feb 9 12:40:34.206: INFO: Waiting up to 5m0s for pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-8z5cj" to be "success or failure" Feb 9 12:40:34.209: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.927943ms Feb 9 12:40:36.329: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123037574s Feb 9 12:40:38.701: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494992772s Feb 9 12:40:40.707: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501018633s Feb 9 12:40:42.721: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515110591s Feb 9 12:40:44.752: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.546128341s STEP: Saw pod success Feb 9 12:40:44.752: INFO: Pod "pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:40:44.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005 container secret-env-test: STEP: delete the pod Feb 9 12:40:44.960: INFO: Waiting for pod pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005 to disappear Feb 9 12:40:44.978: INFO: Pod pod-secrets-5d7c2a2f-4b39-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:40:44.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8z5cj" for this suite. Feb 9 12:40:51.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:40:51.553: INFO: namespace: e2e-tests-secrets-8z5cj, resource: bindings, ignored listing per whitelist Feb 9 12:40:51.564: INFO: namespace e2e-tests-secrets-8z5cj deletion completed in 6.571885188s • [SLOW TEST:17.724 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:40:51.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 9 12:40:51.749: INFO: Waiting up to 5m0s for pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-tn48b" to be "success or failure" Feb 9 12:40:51.758: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.939962ms Feb 9 12:40:53.779: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029898317s Feb 9 12:40:55.798: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048613947s Feb 9 12:40:58.522: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77244925s Feb 9 12:41:00.547: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797696996s Feb 9 12:41:02.600: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.850818169s Feb 9 12:41:04.661: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.911280247s STEP: Saw pod success Feb 9 12:41:04.661: INFO: Pod "pod-67ec5e39-4b39-11ea-aa78-0242ac110005" satisfied condition "success or failure" Feb 9 12:41:04.667: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-67ec5e39-4b39-11ea-aa78-0242ac110005 container test-container: STEP: delete the pod Feb 9 12:41:04.903: INFO: Waiting for pod pod-67ec5e39-4b39-11ea-aa78-0242ac110005 to disappear Feb 9 12:41:04.912: INFO: Pod pod-67ec5e39-4b39-11ea-aa78-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:41:04.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tn48b" for this suite. Feb 9 12:41:11.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:41:11.201: INFO: namespace: e2e-tests-emptydir-tn48b, resource: bindings, ignored listing per whitelist Feb 9 12:41:11.275: INFO: namespace e2e-tests-emptydir-tn48b deletion completed in 6.274307491s • [SLOW TEST:19.710 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:41:11.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 9 12:41:11.500: INFO: PodSpec: initContainers in spec.initContainers Feb 9 12:42:23.754: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-73b9eae3-4b39-11ea-aa78-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-9qdmc", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-9qdmc/pods/pod-init-73b9eae3-4b39-11ea-aa78-0242ac110005", UID:"73c643b9-4b39-11ea-a994-fa163e34d433", ResourceVersion:"21090253", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716848871, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"500512172"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r4cvx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), 
EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001881cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r4cvx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r4cvx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r4cvx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d97b28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dae960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d97ba0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d97bc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d97bc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d97bcc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848871, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848871, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0017558c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001482540)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0014825b0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ef0814e08f102426352d218b9d1e8e3b6fde00ef4076a43a36a5c6e3edf20055"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001755900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0017558e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 9 12:42:23.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9qdmc" for this suite. Feb 9 12:42:47.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 9 12:42:48.081: INFO: namespace: e2e-tests-init-container-9qdmc, resource: bindings, ignored listing per whitelist Feb 9 12:42:48.120: INFO: namespace e2e-tests-init-container-9qdmc deletion completed in 24.325941337s • [SLOW TEST:96.845 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 9 12:42:48.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 9 12:42:48.342: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 14.845031ms)
Feb  9 12:42:48.347: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.405001ms)
Feb  9 12:42:48.352: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.398663ms)
Feb  9 12:42:48.358: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.271305ms)
Feb  9 12:42:48.363: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.245763ms)
Feb  9 12:42:48.369: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.409721ms)
Feb  9 12:42:48.372: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.489721ms)
Feb  9 12:42:48.376: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.993179ms)
Feb  9 12:42:48.569: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 192.255683ms)
Feb  9 12:42:48.579: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.57584ms)
Feb  9 12:42:48.584: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.694776ms)
Feb  9 12:42:48.589: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.944528ms)
Feb  9 12:42:48.593: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.114663ms)
Feb  9 12:42:48.597: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.146623ms)
Feb  9 12:42:48.601: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.489156ms)
Feb  9 12:42:48.604: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.420641ms)
Feb  9 12:42:48.608: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.877705ms)
Feb  9 12:42:48.611: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.12077ms)
Feb  9 12:42:48.615: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.427229ms)
Feb  9 12:42:48.618: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.064935ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:42:48.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-8886g" for this suite.
Feb  9 12:42:54.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:42:54.756: INFO: namespace: e2e-tests-proxy-8886g, resource: bindings, ignored listing per whitelist
Feb  9 12:42:54.796: INFO: namespace e2e-tests-proxy-8886g deletion completed in 6.174584726s

• [SLOW TEST:6.675 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
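The numbered requests above all fetch the kubelet's log index for node hunter-server-hu5at5svl7ps through the API server's proxy subresource on the explicit kubelet port 10250; each line records the truncated response body, the 200 status, and the request latency. A rough manual equivalent of one such request (path taken directly from the log) is:

# fetch the kubelet /logs/ index via the apiserver proxy subresource
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
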
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:42:54.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb  9 12:42:54.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-tl8cl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  9 12:43:05.531: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0209 12:43:03.775682    3675 log.go:172] (0xc00070c160) (0xc000634640) Create stream\nI0209 12:43:03.776100    3675 log.go:172] (0xc00070c160) (0xc000634640) Stream added, broadcasting: 1\nI0209 12:43:03.785911    3675 log.go:172] (0xc00070c160) Reply frame received for 1\nI0209 12:43:03.785975    3675 log.go:172] (0xc00070c160) (0xc00068d680) Create stream\nI0209 12:43:03.785980    3675 log.go:172] (0xc00070c160) (0xc00068d680) Stream added, broadcasting: 3\nI0209 12:43:03.787188    3675 log.go:172] (0xc00070c160) Reply frame received for 3\nI0209 12:43:03.787283    3675 log.go:172] (0xc00070c160) (0xc0007f8000) Create stream\nI0209 12:43:03.787297    3675 log.go:172] (0xc00070c160) (0xc0007f8000) Stream added, broadcasting: 5\nI0209 12:43:03.788411    3675 log.go:172] (0xc00070c160) Reply frame received for 5\nI0209 12:43:03.788425    3675 log.go:172] (0xc00070c160) (0xc00068d720) Create stream\nI0209 12:43:03.788429    3675 log.go:172] (0xc00070c160) (0xc00068d720) Stream added, broadcasting: 7\nI0209 12:43:03.789659    3675 log.go:172] (0xc00070c160) Reply frame received for 7\nI0209 12:43:03.790002    3675 log.go:172] (0xc00068d680) (3) Writing data frame\nI0209 12:43:03.790208    3675 log.go:172] (0xc00068d680) (3) Writing data frame\nI0209 12:43:03.799243    3675 log.go:172] (0xc00070c160) Data frame received for 5\nI0209 12:43:03.799255    3675 log.go:172] (0xc0007f8000) (5) Data frame handling\nI0209 12:43:03.799268    3675 log.go:172] (0xc0007f8000) (5) Data frame sent\nI0209 12:43:03.806482    3675 log.go:172] (0xc00070c160) Data frame received for 5\nI0209 12:43:03.806496    3675 log.go:172] (0xc0007f8000) (5) Data frame handling\nI0209 12:43:03.806506    3675 log.go:172] (0xc0007f8000) (5) Data frame sent\nI0209 12:43:05.466166    3675 log.go:172] (0xc00070c160) Data frame received for 1\nI0209 12:43:05.466286    3675 log.go:172] (0xc000634640) (1) Data frame handling\nI0209 12:43:05.466310    3675 log.go:172] (0xc000634640) (1) Data frame sent\nI0209 12:43:05.466436    3675 log.go:172] (0xc00070c160) (0xc000634640) Stream removed, broadcasting: 1\nI0209 12:43:05.468681    3675 log.go:172] (0xc00070c160) (0xc00068d680) Stream removed, broadcasting: 3\nI0209 12:43:05.469943    3675 log.go:172] (0xc00070c160) (0xc0007f8000) Stream removed, broadcasting: 5\nI0209 12:43:05.471098    3675 log.go:172] (0xc00070c160) (0xc00068d720) Stream removed, broadcasting: 7\nI0209 12:43:05.471154    3675 log.go:172] (0xc00070c160) (0xc000634640) Stream removed, broadcasting: 1\nI0209 12:43:05.471170    3675 log.go:172] (0xc00070c160) (0xc00068d680) Stream removed, broadcasting: 3\nI0209 12:43:05.471180    3675 log.go:172] (0xc00070c160) (0xc0007f8000) Stream removed, broadcasting: 5\nI0209 12:43:05.471192    3675 log.go:172] (0xc00070c160) (0xc00068d720) Stream removed, broadcasting: 7\n"
Feb  9 12:43:05.531: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:43:08.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tl8cl" for this suite.
Feb  9 12:43:14.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:43:14.222: INFO: namespace: e2e-tests-kubectl-tl8cl, resource: bindings, ignored listing per whitelist
Feb  9 12:43:14.377: INFO: namespace e2e-tests-kubectl-tl8cl deletion completed in 6.243636774s

• [SLOW TEST:19.580 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
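The --rm job test above drives kubectl run with an attached stdin: busybox runs `cat`, the piped input ("abcd1234" in the stdout line) is echoed back, and the Job is removed once the command exits. A hand-run sketch of the same invocation, with the namespace left as a placeholder, is:

# sketch of the logged command; --generator=job/v1 is deprecated on this release, as the stderr above notes
echo abcd1234 | kubectl --namespace=<namespace> run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
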
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:43:14.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  9 12:43:14.724: INFO: Creating deployment "test-recreate-deployment"
Feb  9 12:43:14.760: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  9 12:43:14.772: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb  9 12:43:16.811: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  9 12:43:16.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848995, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 12:43:18.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848995, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 12:43:21.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848995, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 12:43:22.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848995, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 12:43:24.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848995, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716848994, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 12:43:26.955: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  9 12:43:26.989: INFO: Updating deployment test-recreate-deployment
Feb  9 12:43:26.989: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb  9 12:43:27.818: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-7z5xg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7z5xg/deployments/test-recreate-deployment,UID:bd2d6470-4b39-11ea-a994-fa163e34d433,ResourceVersion:21090429,Generation:2,CreationTimestamp:2020-02-09 12:43:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-09 12:43:27 +0000 UTC 2020-02-09 12:43:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-09 12:43:27 +0000 UTC 2020-02-09 12:43:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  9 12:43:27.840: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-7z5xg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7z5xg/replicasets/test-recreate-deployment-589c4bfd,UID:c4b55f10-4b39-11ea-a994-fa163e34d433,ResourceVersion:21090426,Generation:1,CreationTimestamp:2020-02-09 12:43:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bd2d6470-4b39-11ea-a994-fa163e34d433 0xc001ce518f 0xc001ce51a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 12:43:27.840: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  9 12:43:27.840: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-7z5xg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7z5xg/replicasets/test-recreate-deployment-5bf7f65dc,UID:bd3498c9-4b39-11ea-a994-fa163e34d433,ResourceVersion:21090417,Generation:2,CreationTimestamp:2020-02-09 12:43:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bd2d6470-4b39-11ea-a994-fa163e34d433 0xc001ce5260 0xc001ce5261}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 12:43:27.857: INFO: Pod "test-recreate-deployment-589c4bfd-pnxsw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-pnxsw,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-7z5xg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7z5xg/pods/test-recreate-deployment-589c4bfd-pnxsw,UID:c4c1e881-4b39-11ea-a994-fa163e34d433,ResourceVersion:21090430,Generation:0,CreationTimestamp:2020-02-09 12:43:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd c4b55f10-4b39-11ea-a994-fa163e34d433 0xc001dca55f 0xc001dca570}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tfbn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tfbn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tfbn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dca5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dca5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:43:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:43:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:43:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 12:43:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-09 12:43:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:43:27.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7z5xg" for this suite.
Feb  9 12:43:36.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:43:37.106: INFO: namespace: e2e-tests-deployment-7z5xg, resource: bindings, ignored listing per whitelist
Feb  9 12:43:37.171: INFO: namespace e2e-tests-deployment-7z5xg deletion completed in 8.251380525s

• [SLOW TEST:22.794 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
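Because the Deployment above uses the Recreate strategy, the old redis ReplicaSet (5bf7f65dc) is scaled to zero before the new nginx ReplicaSet (589c4bfd) creates its pod, which is why the final dump shows a single Pending nginx pod and no overlap between old and new pods. A minimal manifest sketch with the same strategy, reusing the labels and image from the dump (everything else assumed), is:

# hedged sketch of a Recreate-strategy Deployment comparable to "test-recreate-deployment"
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # delete old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
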
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:43:37.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4vtkd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  9 12:43:37.373: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  9 12:44:19.773: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-4vtkd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:44:19.773: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:44:19.906378       8 log.go:172] (0xc0018066e0) (0xc0014cd220) Create stream
I0209 12:44:19.906489       8 log.go:172] (0xc0018066e0) (0xc0014cd220) Stream added, broadcasting: 1
I0209 12:44:19.913121       8 log.go:172] (0xc0018066e0) Reply frame received for 1
I0209 12:44:19.913239       8 log.go:172] (0xc0018066e0) (0xc0024152c0) Create stream
I0209 12:44:19.913255       8 log.go:172] (0xc0018066e0) (0xc0024152c0) Stream added, broadcasting: 3
I0209 12:44:19.914525       8 log.go:172] (0xc0018066e0) Reply frame received for 3
I0209 12:44:19.914575       8 log.go:172] (0xc0018066e0) (0xc00146a640) Create stream
I0209 12:44:19.914594       8 log.go:172] (0xc0018066e0) (0xc00146a640) Stream added, broadcasting: 5
I0209 12:44:19.915660       8 log.go:172] (0xc0018066e0) Reply frame received for 5
I0209 12:44:20.256490       8 log.go:172] (0xc0018066e0) Data frame received for 3
I0209 12:44:20.256601       8 log.go:172] (0xc0024152c0) (3) Data frame handling
I0209 12:44:20.256662       8 log.go:172] (0xc0024152c0) (3) Data frame sent
I0209 12:44:20.403000       8 log.go:172] (0xc0018066e0) (0xc0024152c0) Stream removed, broadcasting: 3
I0209 12:44:20.403263       8 log.go:172] (0xc0018066e0) Data frame received for 1
I0209 12:44:20.403314       8 log.go:172] (0xc0014cd220) (1) Data frame handling
I0209 12:44:20.403382       8 log.go:172] (0xc0014cd220) (1) Data frame sent
I0209 12:44:20.403430       8 log.go:172] (0xc0018066e0) (0xc00146a640) Stream removed, broadcasting: 5
I0209 12:44:20.403486       8 log.go:172] (0xc0018066e0) (0xc0014cd220) Stream removed, broadcasting: 1
I0209 12:44:20.403517       8 log.go:172] (0xc0018066e0) Go away received
I0209 12:44:20.403828       8 log.go:172] (0xc0018066e0) (0xc0014cd220) Stream removed, broadcasting: 1
I0209 12:44:20.403876       8 log.go:172] (0xc0018066e0) (0xc0024152c0) Stream removed, broadcasting: 3
I0209 12:44:20.403908       8 log.go:172] (0xc0018066e0) (0xc00146a640) Stream removed, broadcasting: 5
Feb  9 12:44:20.404: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:44:20.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-4vtkd" for this suite.
Feb  9 12:44:44.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:44:44.775: INFO: namespace: e2e-tests-pod-network-test-4vtkd, resource: bindings, ignored listing per whitelist
Feb  9 12:44:44.775: INFO: namespace e2e-tests-pod-network-test-4vtkd deletion completed in 24.35495693s

• [SLOW TEST:67.604 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
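The UDP check above execs into host-test-container-pod and asks the dial helper at 10.32.0.5:8080 to send a single UDP probe to 10.32.0.4:8081; the empty map in "Waiting for endpoints: map[]" means no expected endpoints are left outstanding. The exec from the log corresponds roughly to:

# the probe the framework runs inside the hostexec container (IPs, ports and pod names taken from the log)
kubectl exec -n <namespace> host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
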
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:44:44.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  9 12:44:45.005: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:44:46.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-mx6cn" for this suite.
Feb  9 12:44:52.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:44:52.335: INFO: namespace: e2e-tests-custom-resource-definition-mx6cn, resource: bindings, ignored listing per whitelist
Feb  9 12:44:52.558: INFO: namespace e2e-tests-custom-resource-definition-mx6cn deletion completed in 6.387205706s

• [SLOW TEST:7.782 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
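The CRD test above only creates a CustomResourceDefinition through the apiextensions client and deletes it again; no manifest appears in the log. An assumed kubectl equivalent (API version, group, and names are purely illustrative) would be:

# hedged sketch: create and delete a throwaway CRD
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl delete crd foos.example.com
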
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:44:52.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 12:44:52.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:44:54.643: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 12:44:54.643: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  9 12:44:54.695: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  9 12:44:54.725: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  9 12:44:54.740: INFO: scanned /root for discovery docs: 
Feb  9 12:44:54.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:45:22.652: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  9 12:45:22.652: INFO: stdout: "Created e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41\nScaling up e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  9 12:45:22.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:45:22.839: INFO: stderr: ""
Feb  9 12:45:22.839: INFO: stdout: "e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41-5l7wj e2e-test-nginx-rc-hg2qm "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  9 12:45:27.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:45:27.996: INFO: stderr: ""
Feb  9 12:45:27.996: INFO: stdout: "e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41-5l7wj "
Feb  9 12:45:27.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41-5l7wj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:45:28.111: INFO: stderr: ""
Feb  9 12:45:28.111: INFO: stdout: "true"
Feb  9 12:45:28.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41-5l7wj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:45:28.227: INFO: stderr: ""
Feb  9 12:45:28.227: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  9 12:45:28.227: INFO: e2e-test-nginx-rc-7da73f9837ccfe7478870658b9545a41-5l7wj is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb  9 12:45:28.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pjm59'
Feb  9 12:45:28.365: INFO: stderr: ""
Feb  9 12:45:28.365: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:45:28.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pjm59" for this suite.
Feb  9 12:45:52.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:45:52.760: INFO: namespace: e2e-tests-kubectl-pjm59, resource: bindings, ignored listing per whitelist
Feb  9 12:45:52.829: INFO: namespace e2e-tests-kubectl-pjm59 deletion completed in 24.454346496s

• [SLOW TEST:60.268 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
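rolling-update (deprecated in favour of kubectl rollout, as its stderr says) copies the RC under a hashed name, scales the copy up while scaling the original down, then deletes the original and renames the copy back, which is exactly the sequence in the stdout above. Reduced to the commands the test ran, with the namespace as a placeholder:

# sketch assembled from the commands logged above
kubectl -n <namespace> run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl -n <namespace> rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
kubectl -n <namespace> get pods -l run=e2e-test-nginx-rc -o template \
  --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl -n <namespace> delete rc e2e-test-nginx-rc
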
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:45:52.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  9 12:45:53.296: INFO: Number of nodes with available pods: 0
Feb  9 12:45:53.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:45:54.316: INFO: Number of nodes with available pods: 0
Feb  9 12:45:54.316: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:45:55.534: INFO: Number of nodes with available pods: 0
Feb  9 12:45:55.534: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:45:56.317: INFO: Number of nodes with available pods: 0
Feb  9 12:45:56.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:45:57.315: INFO: Number of nodes with available pods: 0
Feb  9 12:45:57.315: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:45:58.364: INFO: Number of nodes with available pods: 0
Feb  9 12:45:58.364: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:00.565: INFO: Number of nodes with available pods: 0
Feb  9 12:46:00.565: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:01.481: INFO: Number of nodes with available pods: 0
Feb  9 12:46:01.481: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:02.528: INFO: Number of nodes with available pods: 0
Feb  9 12:46:02.528: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:03.416: INFO: Number of nodes with available pods: 0
Feb  9 12:46:03.416: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:04.311: INFO: Number of nodes with available pods: 0
Feb  9 12:46:04.311: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:05.320: INFO: Number of nodes with available pods: 1
Feb  9 12:46:05.320: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  9 12:46:05.384: INFO: Number of nodes with available pods: 0
Feb  9 12:46:05.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:06.406: INFO: Number of nodes with available pods: 0
Feb  9 12:46:06.406: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:07.884: INFO: Number of nodes with available pods: 0
Feb  9 12:46:07.884: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:08.418: INFO: Number of nodes with available pods: 0
Feb  9 12:46:08.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:09.412: INFO: Number of nodes with available pods: 0
Feb  9 12:46:09.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:10.414: INFO: Number of nodes with available pods: 0
Feb  9 12:46:10.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:11.413: INFO: Number of nodes with available pods: 0
Feb  9 12:46:11.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:12.412: INFO: Number of nodes with available pods: 0
Feb  9 12:46:12.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:13.483: INFO: Number of nodes with available pods: 0
Feb  9 12:46:13.483: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:15.099: INFO: Number of nodes with available pods: 0
Feb  9 12:46:15.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:15.436: INFO: Number of nodes with available pods: 0
Feb  9 12:46:15.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:16.415: INFO: Number of nodes with available pods: 0
Feb  9 12:46:16.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:17.441: INFO: Number of nodes with available pods: 0
Feb  9 12:46:17.441: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:18.398: INFO: Number of nodes with available pods: 0
Feb  9 12:46:18.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:19.733: INFO: Number of nodes with available pods: 0
Feb  9 12:46:19.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:20.765: INFO: Number of nodes with available pods: 0
Feb  9 12:46:20.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:21.412: INFO: Number of nodes with available pods: 0
Feb  9 12:46:21.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:22.423: INFO: Number of nodes with available pods: 0
Feb  9 12:46:22.423: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 12:46:23.410: INFO: Number of nodes with available pods: 1
Feb  9 12:46:23.411: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bwtmp, will wait for the garbage collector to delete the pods
Feb  9 12:46:23.661: INFO: Deleting DaemonSet.extensions daemon-set took: 92.852979ms
Feb  9 12:46:23.863: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.417822ms
Feb  9 12:46:31.276: INFO: Number of nodes with available pods: 0
Feb  9 12:46:31.276: INFO: Number of running nodes: 0, number of available pods: 0
Feb  9 12:46:31.283: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bwtmp/daemonsets","resourceVersion":"21090858"},"items":null}

Feb  9 12:46:31.288: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bwtmp/pods","resourceVersion":"21090858"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:46:31.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bwtmp" for this suite.
Feb  9 12:46:39.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:46:39.429: INFO: namespace: e2e-tests-daemonsets-bwtmp, resource: bindings, ignored listing per whitelist
Feb  9 12:46:39.534: INFO: namespace e2e-tests-daemonsets-bwtmp deletion completed in 8.216156551s

• [SLOW TEST:46.705 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
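On this single-node cluster the DaemonSet converges to one available pod; the long runs of "Number of nodes with available pods: 0" are the poll loop waiting for that pod to become available, first after creation and again after the test deletes it to check that the controller revives it. A minimal sketch of a comparable "simple daemon" (image and labels assumed, not the test's own):

# hedged sketch of a simple DaemonSet
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # placeholder image
EOF
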
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:46:39.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-3754659a-4b3a-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  9 12:46:39.725: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-b468x" to be "success or failure"
Feb  9 12:46:39.739: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.194546ms
Feb  9 12:46:41.753: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027803845s
Feb  9 12:46:43.771: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045701259s
Feb  9 12:46:45.868: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142757579s
Feb  9 12:46:47.906: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180840831s
Feb  9 12:46:49.923: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197336247s
STEP: Saw pod success
Feb  9 12:46:49.923: INFO: Pod "pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:46:49.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 12:46:50.207: INFO: Waiting for pod pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:46:50.217: INFO: Pod pod-projected-configmaps-375524b2-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:46:50.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b468x" for this suite.
Feb  9 12:46:56.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:46:56.626: INFO: namespace: e2e-tests-projected-b468x, resource: bindings, ignored listing per whitelist
Feb  9 12:46:56.685: INFO: namespace e2e-tests-projected-b468x deletion completed in 6.452991652s

• [SLOW TEST:17.150 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
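The projected ConfigMap test mounts a ConfigMap key through a projected volume, runs the consumer container as a non-root user, and treats the pod reaching Succeeded (with the expected file contents in its log) as the pass condition, which is the "success or failure" wait seen above. A sketch under those assumptions follows; the key, mount path, UID, and image are illustrative, only the non-root requirement and the projected source come from the test name:

# hedged sketch: consume a ConfigMap via a projected volume as a non-root user
kubectl create configmap projected-configmap-test-volume --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  securityContext:
    runAsUser: 1000            # non-root
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29   # placeholder image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
EOF
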
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:46:56.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-419aa61c-4b3a-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  9 12:46:56.928: INFO: Waiting up to 5m0s for pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-tg2wl" to be "success or failure"
Feb  9 12:46:56.957: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.045456ms
Feb  9 12:46:59.077: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149401562s
Feb  9 12:47:01.090: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162229503s
Feb  9 12:47:04.044: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.116235959s
Feb  9 12:47:06.093: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.165593817s
Feb  9 12:47:08.110: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.18236344s
STEP: Saw pod success
Feb  9 12:47:08.110: INFO: Pod "pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:47:08.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  9 12:47:08.312: INFO: Waiting for pod pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:47:08.324: INFO: Pod pod-configmaps-419be55c-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:47:08.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tg2wl" for this suite.
Feb  9 12:47:16.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:47:16.420: INFO: namespace: e2e-tests-configmap-tg2wl, resource: bindings, ignored listing per whitelist
Feb  9 12:47:16.892: INFO: namespace e2e-tests-configmap-tg2wl deletion completed in 8.546358947s

• [SLOW TEST:20.207 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
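The plain ConfigMap test that just finished is the same non-root consumption check as the projected variant above, but with a direct configMap volume source instead of a projected one; the relevant schema can be pulled straight from the API:

# the direct configMap volume source used here, vs. projected.sources.configMap in the sketch above
kubectl explain pod.spec.volumes.configMap
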
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:47:16.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  9 12:47:17.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-tfsld" to be "success or failure"
Feb  9 12:47:17.358: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.020731ms
Feb  9 12:47:19.791: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.46174065s
Feb  9 12:47:21.811: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481601012s
Feb  9 12:47:24.052: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723447499s
Feb  9 12:47:26.071: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.742443599s
Feb  9 12:47:28.087: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.758287133s
STEP: Saw pod success
Feb  9 12:47:28.087: INFO: Pod "downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:47:28.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005 container client-container: 
STEP: delete the pod
Feb  9 12:47:28.411: INFO: Waiting for pod downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:47:28.470: INFO: Pod downwardapi-volume-4dc3eb7a-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:47:28.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tfsld" for this suite.
Feb  9 12:47:36.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:47:36.854: INFO: namespace: e2e-tests-downward-api-tfsld, resource: bindings, ignored listing per whitelist
Feb  9 12:47:36.854: INFO: namespace e2e-tests-downward-api-tfsld deletion completed in 8.364669613s

• [SLOW TEST:19.962 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
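The Downward API volume test above corresponds to setting a per-item mode on a downwardAPI volume item; the kubelet applies that mode to the projected file. A rough equivalent (names and mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
        mode: 0400             # octal file mode for this item (owner read-only)
EOF
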
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:47:36.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb  9 12:47:37.042: INFO: Waiting up to 5m0s for pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-containers-vtp5q" to be "success or failure"
Feb  9 12:47:37.173: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 131.847934ms
Feb  9 12:47:39.187: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145647256s
Feb  9 12:47:41.205: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163268955s
Feb  9 12:47:43.354: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312465166s
Feb  9 12:47:45.386: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343887519s
Feb  9 12:47:47.407: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.365725374s
STEP: Saw pod success
Feb  9 12:47:47.407: INFO: Pod "client-containers-598450f2-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:47:47.413: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-598450f2-4b3a-11ea-aa78-0242ac110005 container test-container: 
STEP: delete the pod
Feb  9 12:47:47.494: INFO: Waiting for pod client-containers-598450f2-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:47:47.504: INFO: Pod client-containers-598450f2-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:47:47.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vtp5q" for this suite.
Feb  9 12:47:53.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:47:53.687: INFO: namespace: e2e-tests-containers-vtp5q, resource: bindings, ignored listing per whitelist
Feb  9 12:47:53.757: INFO: namespace e2e-tests-containers-vtp5q deletion completed in 6.237463088s

• [SLOW TEST:16.902 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
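The Docker Containers test above exercises the distinction between the container command (which maps to the image ENTRYPOINT) and args (which maps to the image CMD, i.e. "docker cmd"). A minimal sketch with an illustrative image and arguments:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # No "command" here: the image's ENTRYPOINT (if any) is kept,
    # while "args" replaces the image's default CMD.
    args: ["/bin/echo", "arguments overridden"]
EOF
kubectl logs override-args-demo    # prints: arguments overridden
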
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:47:53.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb  9 12:47:54.031: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  9 12:47:54.052: INFO: Waiting for terminating namespaces to be deleted...
Feb  9 12:47:54.061: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb  9 12:47:54.095: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb  9 12:47:54.096: INFO: 	Container weave ready: true, restart count 0
Feb  9 12:47:54.096: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 12:47:54.096: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Feb  9 12:47:54.096: INFO: 	Container coredns ready: true, restart count 0
Feb  9 12:47:54.096: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  9 12:47:54.096: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  9 12:47:54.096: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb  9 12:47:54.096: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Feb  9 12:47:54.096: INFO: 	Container coredns ready: true, restart count 0
Feb  9 12:47:54.096: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Feb  9 12:47:54.096: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 12:47:54.096: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-69d63844-4b3a-11ea-aa78-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-69d63844-4b3a-11ea-aa78-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-69d63844-4b3a-11ea-aa78-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:48:16.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rh4zd" for this suite.
Feb  9 12:48:32.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:48:32.749: INFO: namespace: e2e-tests-sched-pred-rh4zd, resource: bindings, ignored listing per whitelist
Feb  9 12:48:32.895: INFO: namespace e2e-tests-sched-pred-rh4zd deletion completed in 16.254287215s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:39.138 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
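The scheduling test above applies a random label to a node and then relaunches a pod whose nodeSelector must match it. The same flow by hand (the label key/value are illustrative; the node name is the one from this run):

kubectl label node hunter-server-hu5at5svl7ps example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"   # label values are strings, so quote numbers
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod nodeselector-demo -o wide                           # should land on the labelled node
kubectl label node hunter-server-hu5at5svl7ps example.com/e2e-demo- # remove the label afterwards
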
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:48:32.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 12:48:33.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-p5wgq'
Feb  9 12:48:33.334: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 12:48:33.334: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  9 12:48:35.517: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-c6phd]
Feb  9 12:48:35.517: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-c6phd" in namespace "e2e-tests-kubectl-p5wgq" to be "running and ready"
Feb  9 12:48:35.571: INFO: Pod "e2e-test-nginx-rc-c6phd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.308867ms
Feb  9 12:48:37.648: INFO: Pod "e2e-test-nginx-rc-c6phd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131218031s
Feb  9 12:48:39.975: INFO: Pod "e2e-test-nginx-rc-c6phd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457413742s
Feb  9 12:48:41.991: INFO: Pod "e2e-test-nginx-rc-c6phd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473418344s
Feb  9 12:48:44.010: INFO: Pod "e2e-test-nginx-rc-c6phd": Phase="Running", Reason="", readiness=true. Elapsed: 8.492488568s
Feb  9 12:48:44.010: INFO: Pod "e2e-test-nginx-rc-c6phd" satisfied condition "running and ready"
Feb  9 12:48:44.010: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-c6phd]
Feb  9 12:48:44.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-p5wgq'
Feb  9 12:48:44.216: INFO: stderr: ""
Feb  9 12:48:44.216: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb  9 12:48:44.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-p5wgq'
Feb  9 12:48:44.406: INFO: stderr: ""
Feb  9 12:48:44.407: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:48:44.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p5wgq" for this suite.
Feb  9 12:49:08.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:49:08.919: INFO: namespace: e2e-tests-kubectl-p5wgq, resource: bindings, ignored listing per whitelist
Feb  9 12:49:09.006: INFO: namespace e2e-tests-kubectl-p5wgq deletion completed in 24.585773349s

• [SLOW TEST:36.108 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
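The kubectl commands in the test above can be replayed by hand. Note that the run/v1 generator was already deprecated in this release and has since been removed from kubectl, so this only works with older clients:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc,pods -l run=e2e-test-nginx-rc   # the generator labels everything with run=<name>
kubectl logs rc/e2e-test-nginx-rc              # empty until nginx has served a request, as in the run above
kubectl delete rc e2e-test-nginx-rc
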
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:49:09.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  9 12:49:09.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-h2mpq" to be "success or failure"
Feb  9 12:49:09.199: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.93789ms
Feb  9 12:49:11.280: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090305839s
Feb  9 12:49:13.300: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110351353s
Feb  9 12:49:15.581: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391757693s
Feb  9 12:49:17.615: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.425773647s
Feb  9 12:49:19.655: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.46572882s
Feb  9 12:49:21.671: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.481163943s
STEP: Saw pod success
Feb  9 12:49:21.671: INFO: Pod "downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:49:21.684: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005 container client-container: 
STEP: delete the pod
Feb  9 12:49:22.740: INFO: Waiting for pod downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:49:22.751: INFO: Pod downwardapi-volume-90711f07-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:49:22.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h2mpq" for this suite.
Feb  9 12:49:28.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:49:28.947: INFO: namespace: e2e-tests-projected-h2mpq, resource: bindings, ignored listing per whitelist
Feb  9 12:49:29.088: INFO: namespace e2e-tests-projected-h2mpq deletion completed in 6.327189179s

• [SLOW TEST:20.080 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
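The projected downward API test above relies on the rule that when a container declares no CPU limit, limits.cpu exposed through a resourceFieldRef falls back to the node's allocatable CPU. A sketch of that wiring (names and divisor are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu set, so the projected value defaults to node allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report the value in millicores
EOF
kubectl logs projected-cpu-limit-demo
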
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:49:29.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-9c69a80e-4b3a-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  9 12:49:29.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-cjtts" to be "success or failure"
Feb  9 12:49:29.335: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439331ms
Feb  9 12:49:31.907: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577811053s
Feb  9 12:49:33.931: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601690494s
Feb  9 12:49:36.037: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708043113s
Feb  9 12:49:38.046: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716783884s
Feb  9 12:49:40.062: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.733360727s
STEP: Saw pod success
Feb  9 12:49:40.062: INFO: Pod "pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:49:40.068: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  9 12:49:40.304: INFO: Waiting for pod pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:49:40.314: INFO: Pod pod-configmaps-9c723daa-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:49:40.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cjtts" for this suite.
Feb  9 12:49:46.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:49:46.596: INFO: namespace: e2e-tests-configmap-cjtts, resource: bindings, ignored listing per whitelist
Feb  9 12:49:46.669: INFO: namespace e2e-tests-configmap-cjtts deletion completed in 6.340558915s

• [SLOW TEST:17.581 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
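The test above mounts the same ConfigMap twice in one pod through two separate volumes. A compact sketch (names and paths illustrative):

kubectl create configmap shared-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: shared-config
  - name: configmap-volume-2
    configMap:
      name: shared-config
EOF
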
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:49:46.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a6e5c0c7-4b3a-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  9 12:49:46.891: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-k5lms" to be "success or failure"
Feb  9 12:49:46.924: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.868948ms
Feb  9 12:49:49.095: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202668377s
Feb  9 12:49:51.110: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218078911s
Feb  9 12:49:53.161: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269634304s
Feb  9 12:49:55.181: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288986815s
Feb  9 12:49:57.198: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306495274s
Feb  9 12:49:59.216: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.323799098s
STEP: Saw pod success
Feb  9 12:49:59.216: INFO: Pod "pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:49:59.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 12:49:59.304: INFO: Waiting for pod pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:49:59.315: INFO: Pod pod-projected-secrets-a6e7bd4b-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:49:59.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k5lms" for this suite.
Feb  9 12:50:05.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:50:05.618: INFO: namespace: e2e-tests-projected-k5lms, resource: bindings, ignored listing per whitelist
Feb  9 12:50:05.671: INFO: namespace e2e-tests-projected-k5lms deletion completed in 6.336819583s

• [SLOW TEST:19.003 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
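The projected secret test above combines a non-root securityContext, an fsGroup, and a defaultMode on the projected volume. Roughly (UID, GID, and mode are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000              # projected files get this group ownership
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440        # octal; applied to every projected file
      sources:
      - secret:
          name: demo-secret
EOF
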
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:50:05.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  9 12:50:05.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-ld65x" to be "success or failure"
Feb  9 12:50:05.970: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.297844ms
Feb  9 12:50:07.996: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042973375s
Feb  9 12:50:10.013: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059769979s
Feb  9 12:50:12.773: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819457762s
Feb  9 12:50:14.793: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.839935234s
Feb  9 12:50:16.865: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.911687343s
STEP: Saw pod success
Feb  9 12:50:16.865: INFO: Pod "downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:50:16.909: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005 container client-container: 
STEP: delete the pod
Feb  9 12:50:17.042: INFO: Waiting for pod downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:50:17.055: INFO: Pod downwardapi-volume-b246c04b-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:50:17.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ld65x" for this suite.
Feb  9 12:50:23.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:50:23.304: INFO: namespace: e2e-tests-projected-ld65x, resource: bindings, ignored listing per whitelist
Feb  9 12:50:23.396: INFO: namespace e2e-tests-projected-ld65x deletion completed in 6.330624396s

• [SLOW TEST:17.724 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:50:23.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gg7c4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-gg7c4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 12:50:39.754: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.763: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.771: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.786: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.796: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.803: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.826: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.839: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.846: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.852: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.860: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.866: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.870: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.876: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.881: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.885: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.889: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.895: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.899: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.903: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005: the server could not find the requested resource (get pods dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005)
Feb  9 12:50:39.903: INFO: Lookups using e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-gg7c4.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  9 12:50:45.035: INFO: DNS probes using e2e-tests-dns-gg7c4/dns-test-bcd48657-4b3a-11ea-aa78-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:50:45.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-gg7c4" for this suite.
Feb  9 12:50:53.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:50:53.634: INFO: namespace: e2e-tests-dns-gg7c4, resource: bindings, ignored listing per whitelist
Feb  9 12:50:53.638: INFO: namespace e2e-tests-dns-gg7c4 deletion completed in 8.314820929s

• [SLOW TEST:30.241 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
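The DNS test above drives dig from wheezy/jessie helper images; the same basic check can be done by hand from any pod with a working resolver (busybox:1.28 is commonly used because its nslookup behaves well):

kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# Expect the ClusterIP of the kubernetes Service, resolved via kubernetes.default.svc.cluster.local.
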
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:50:53.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:51:06.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-p9wj7" for this suite.
Feb  9 12:51:14.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:51:14.844: INFO: namespace: e2e-tests-emptydir-wrapper-p9wj7, resource: bindings, ignored listing per whitelist
Feb  9 12:51:14.923: INFO: namespace e2e-tests-emptydir-wrapper-p9wj7 deletion completed in 8.218467574s

• [SLOW TEST:21.285 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:51:14.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-4cz6m/secret-test-db95d40a-4b3a-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  9 12:51:15.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-4cz6m" to be "success or failure"
Feb  9 12:51:15.387: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.928783ms
Feb  9 12:51:17.718: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433873055s
Feb  9 12:51:19.740: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455962055s
Feb  9 12:51:21.865: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580893492s
Feb  9 12:51:23.882: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.597470855s
Feb  9 12:51:25.952: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.667833315s
Feb  9 12:51:28.123: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.838457314s
STEP: Saw pod success
Feb  9 12:51:28.123: INFO: Pod "pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:51:28.166: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005 container env-test: 
STEP: delete the pod
Feb  9 12:51:28.296: INFO: Waiting for pod pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:51:28.314: INFO: Pod pod-configmaps-db96d84c-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:51:28.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4cz6m" for this suite.
Feb  9 12:51:34.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:51:34.517: INFO: namespace: e2e-tests-secrets-4cz6m, resource: bindings, ignored listing per whitelist
Feb  9 12:51:34.654: INFO: namespace e2e-tests-secrets-4cz6m deletion completed in 6.320986693s

• [SLOW TEST:19.730 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
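The Secrets test above injects a secret key into a container environment variable via secretKeyRef. A minimal sketch (names illustrative):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
kubectl logs secret-env-demo    # should print: SECRET_DATA=value-1
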
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:51:34.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  9 12:51:34.877: INFO: Waiting up to 5m0s for pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-sbnxl" to be "success or failure"
Feb  9 12:51:34.951: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.662853ms
Feb  9 12:51:36.963: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086443775s
Feb  9 12:51:38.998: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121757339s
Feb  9 12:51:41.765: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.888311347s
Feb  9 12:51:43.784: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.907177686s
Feb  9 12:51:45.813: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.936194906s
STEP: Saw pod success
Feb  9 12:51:45.813: INFO: Pod "pod-e74825a2-4b3a-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:51:45.850: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e74825a2-4b3a-11ea-aa78-0242ac110005 container test-container: 
STEP: delete the pod
Feb  9 12:51:46.176: INFO: Waiting for pod pod-e74825a2-4b3a-11ea-aa78-0242ac110005 to disappear
Feb  9 12:51:46.199: INFO: Pod pod-e74825a2-4b3a-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:51:46.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sbnxl" for this suite.
Feb  9 12:51:52.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:51:52.304: INFO: namespace: e2e-tests-emptydir-sbnxl, resource: bindings, ignored listing per whitelist
Feb  9 12:51:52.469: INFO: namespace e2e-tests-emptydir-sbnxl deletion completed in 6.261511645s

• [SLOW TEST:17.814 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
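The emptyDir test above uses medium: Memory, i.e. a tmpfs-backed volume, and verifies the mount's mode. Sketch (names and paths illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # back the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-tmpfs-demo
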
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:51:52.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-f23a4267-4b3a-11ea-aa78-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-f23a4267-4b3a-11ea-aa78-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:53:12.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bksf2" for this suite.
Feb  9 12:53:36.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:53:36.369: INFO: namespace: e2e-tests-projected-bksf2, resource: bindings, ignored listing per whitelist
Feb  9 12:53:36.422: INFO: namespace e2e-tests-projected-bksf2 deletion completed in 24.221702702s

• [SLOW TEST:103.952 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
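The projected ConfigMap test above updates the ConfigMap while a pod has it mounted and waits for the kubelet to refresh the projected file. By hand (names illustrative; propagation typically takes up to the kubelet sync period plus its cache TTL, which is why the test above waits so long):

kubectl create configmap live-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-update-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: live-config
EOF
kubectl patch configmap live-config -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-configmap-update-demo   # output flips from value-1 to value-2 after the refresh
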
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:53:36.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb  9 12:53:36.744: INFO: Waiting up to 5m0s for pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005" in namespace "e2e-tests-var-expansion-d449x" to be "success or failure"
Feb  9 12:53:36.759: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.274958ms
Feb  9 12:53:38.983: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23940913s
Feb  9 12:53:41.017: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273344407s
Feb  9 12:53:43.055: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310894962s
Feb  9 12:53:45.317: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573402055s
Feb  9 12:53:47.331: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.586788441s
Feb  9 12:53:49.345: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.601244846s
STEP: Saw pod success
Feb  9 12:53:49.345: INFO: Pod "var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:53:49.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  9 12:53:49.931: INFO: Waiting for pod var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005 to disappear
Feb  9 12:53:50.296: INFO: Pod var-expansion-2fe8ee88-4b3b-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:53:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-d449x" for this suite.
Feb  9 12:53:56.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:53:56.418: INFO: namespace: e2e-tests-var-expansion-d449x, resource: bindings, ignored listing per whitelist
Feb  9 12:53:56.682: INFO: namespace e2e-tests-var-expansion-d449x deletion completed in 6.349095394s

• [SLOW TEST:20.261 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
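The Variable Expansion test above substitutes an environment variable into the container command using the $(VAR) syntax, which Kubernetes expands before the container starts (it is not shell expansion). Sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is expanded by Kubernetes, so the container runs: echo command-test-value
    command: ["sh", "-c", "echo command-$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # prints: command-test-value
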
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:53:56.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  9 12:53:56.936: INFO: Waiting up to 5m0s for pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-z4j2c" to be "success or failure"
Feb  9 12:53:56.952: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.282938ms
Feb  9 12:53:58.973: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03654543s
Feb  9 12:54:00.993: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057068962s
Feb  9 12:54:03.007: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070357325s
Feb  9 12:54:06.328: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.391337406s
Feb  9 12:54:08.351: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.414599214s
STEP: Saw pod success
Feb  9 12:54:08.351: INFO: Pod "pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:54:08.369: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005 container test-container: 
STEP: delete the pod
Feb  9 12:54:09.052: INFO: Waiting for pod pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005 to disappear
Feb  9 12:54:09.105: INFO: Pod pod-3bf38a2c-4b3b-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:54:09.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-z4j2c" for this suite.
Feb  9 12:54:17.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:54:17.399: INFO: namespace: e2e-tests-emptydir-z4j2c, resource: bindings, ignored listing per whitelist
Feb  9 12:54:17.445: INFO: namespace e2e-tests-emptydir-z4j2c deletion completed in 8.299140392s

• [SLOW TEST:20.761 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
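
The EmptyDir test above creates a pod whose container reports the mode of an emptyDir mounted on the default medium (node-local disk). A minimal sketch under the assumption that the container simply stats the mount point; names, image and command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// emptyDir with no Medium set: backed by the node's disk rather than tmpfs.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-medium-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}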
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:54:17.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  9 12:54:43.570: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:43.632: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:45.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:45.652: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:47.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:47.670: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:49.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:49.756: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:51.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:51.820: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:53.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:53.681: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:55.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:55.697: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:57.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:54:57.713: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:54:59.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:01.326: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:01.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:01.718: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:03.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:04.174: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:05.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:05.743: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:07.635: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:07.655: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:09.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:09.665: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:11.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:11.646: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  9 12:55:13.633: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  9 12:55:13.652: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:55:13.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zqlqs" for this suite.
Feb  9 12:55:37.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:55:37.976: INFO: namespace: e2e-tests-container-lifecycle-hook-zqlqs, resource: bindings, ignored listing per whitelist
Feb  9 12:55:37.976: INFO: namespace e2e-tests-container-lifecycle-hook-zqlqs deletion completed in 24.247806843s

• [SLOW TEST:80.528 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
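
The lifecycle-hook test deletes a pod that carries a preStop exec hook and then polls, as seen above, until the pod object is finally gone; the kubelet runs the hook before terminating the container, which is what stretches the deletion out. A minimal sketch of such a pod (pod name from the log; image, command, hook and grace period are assumptions; with the v1.13-era k8s.io/api used by this suite the handler type is corev1.Handler, renamed LifecycleHandler in later releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(30)
	// On deletion the kubelet first runs the preStop command, then signals the container.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo prestop"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}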
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:55:37.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-8sbb
STEP: Creating a pod to test atomic-volume-subpath
Feb  9 12:55:38.326: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8sbb" in namespace "e2e-tests-subpath-zkhqs" to be "success or failure"
Feb  9 12:55:38.346: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.140392ms
Feb  9 12:55:40.370: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043079766s
Feb  9 12:55:42.393: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066628314s
Feb  9 12:55:44.418: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091889653s
Feb  9 12:55:48.286: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.959873236s
Feb  9 12:55:50.339: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.012397351s
Feb  9 12:55:52.357: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.030165364s
Feb  9 12:55:54.372: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.044974785s
Feb  9 12:55:56.398: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.071421104s
Feb  9 12:55:58.416: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 20.088977053s
Feb  9 12:56:00.439: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 22.112065212s
Feb  9 12:56:02.463: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 24.136049324s
Feb  9 12:56:04.495: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 26.168008864s
Feb  9 12:56:06.516: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 28.18925025s
Feb  9 12:56:08.563: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 30.236912622s
Feb  9 12:56:10.582: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 32.255800641s
Feb  9 12:56:12.611: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 34.284035247s
Feb  9 12:56:14.643: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 36.316338798s
Feb  9 12:56:16.681: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Running", Reason="", readiness=false. Elapsed: 38.35429461s
Feb  9 12:56:18.699: INFO: Pod "pod-subpath-test-configmap-8sbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.372465251s
STEP: Saw pod success
Feb  9 12:56:18.699: INFO: Pod "pod-subpath-test-configmap-8sbb" satisfied condition "success or failure"
Feb  9 12:56:18.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-8sbb container test-container-subpath-configmap-8sbb: 
STEP: delete the pod
Feb  9 12:56:20.617: INFO: Waiting for pod pod-subpath-test-configmap-8sbb to disappear
Feb  9 12:56:21.017: INFO: Pod pod-subpath-test-configmap-8sbb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8sbb
Feb  9 12:56:21.017: INFO: Deleting pod "pod-subpath-test-configmap-8sbb" in namespace "e2e-tests-subpath-zkhqs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:56:21.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zkhqs" for this suite.
Feb  9 12:56:27.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:56:27.438: INFO: namespace: e2e-tests-subpath-zkhqs, resource: bindings, ignored listing per whitelist
Feb  9 12:56:27.560: INFO: namespace e2e-tests-subpath-zkhqs deletion completed in 6.477607128s

• [SLOW TEST:49.584 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
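
The Subpath test mounts a single ConfigMap key into the container via subPath and waits for the pod to succeed. A minimal sketch, with all object, key and path names illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// subPath projects one key of the ConfigMap volume as a single file at the mount path.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /mnt/config-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config-volume",
					MountPath: "/mnt/config-file",
					SubPath:   "config-key", // only this key of the volume is mounted
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}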
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:56:27.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:57:31.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-zzcxp" for this suite.
Feb  9 12:57:39.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:57:39.907: INFO: namespace: e2e-tests-container-runtime-zzcxp, resource: bindings, ignored listing per whitelist
Feb  9 12:57:39.962: INFO: namespace e2e-tests-container-runtime-zzcxp deletion completed in 8.268709823s

• [SLOW TEST:72.399 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
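
The Container Runtime test starts containers named terminate-cmd-rpa, -rpof and -rpn (the suffixes presumably abbreviate the restart policies Always, OnFailure and Never) and checks RestartCount, Phase, the Ready condition and the container State for each. A minimal sketch of one pod per policy; the image and the exit command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each pod runs a container that exits immediately; the observed status then
	// depends only on the pod-level RestartPolicy.
	policies := map[string]corev1.RestartPolicy{
		"terminate-cmd-rpa":  corev1.RestartPolicyAlways,
		"terminate-cmd-rpof": corev1.RestartPolicyOnFailure,
		"terminate-cmd-rpn":  corev1.RestartPolicyNever,
	}
	for name, policy := range policies {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				RestartPolicy: policy,
				Containers: []corev1.Container{{
					Name:    name,
					Image:   "busybox",
					Command: []string{"sh", "-c", "exit 0"},
				}},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}
}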
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:57:39.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb  9 12:57:40.152: INFO: Waiting up to 5m0s for pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005" in namespace "e2e-tests-containers-4m5xn" to be "success or failure"
Feb  9 12:57:40.158: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.620547ms
Feb  9 12:57:42.177: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024278741s
Feb  9 12:57:44.222: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069537074s
Feb  9 12:57:46.325: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17286801s
Feb  9 12:57:49.252: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.099317428s
Feb  9 12:57:51.265: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.112367602s
STEP: Saw pod success
Feb  9 12:57:51.265: INFO: Pod "client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:57:51.268: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005 container test-container: 
STEP: delete the pod
Feb  9 12:57:51.321: INFO: Waiting for pod client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005 to disappear
Feb  9 12:57:51.330: INFO: Pod client-containers-c10004c8-4b3b-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:57:51.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4m5xn" for this suite.
Feb  9 12:57:57.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:57:57.866: INFO: namespace: e2e-tests-containers-4m5xn, resource: bindings, ignored listing per whitelist
Feb  9 12:57:57.886: INFO: namespace e2e-tests-containers-4m5xn deletion completed in 6.548794788s

• [SLOW TEST:17.924 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
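
The Docker Containers test verifies that setting both Command and Args on a container replaces the image's ENTRYPOINT and CMD. A minimal sketch; the actual values used by the test do not appear in the log, so these are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"echo"},                       // overrides the image ENTRYPOINT
				Args:    []string{"override", "all", "defaults"}, // overrides the image CMD
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}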
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:57:57.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  9 12:57:58.128: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 25.492773ms)
Feb  9 12:57:58.169: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 41.352216ms)
Feb  9 12:57:58.179: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 9.196955ms)
Feb  9 12:57:58.186: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.50905ms)
Feb  9 12:57:58.192: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.909093ms)
Feb  9 12:57:58.198: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.312388ms)
Feb  9 12:57:58.203: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.161306ms)
Feb  9 12:57:58.208: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.277016ms)
Feb  9 12:57:58.213: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.76644ms)
Feb  9 12:57:58.218: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.248829ms)
Feb  9 12:57:58.223: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.877423ms)
Feb  9 12:57:58.229: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.951616ms)
Feb  9 12:57:58.237: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.6132ms)
Feb  9 12:57:58.244: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.902572ms)
Feb  9 12:57:58.249: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.224891ms)
Feb  9 12:57:58.255: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.624979ms)
Feb  9 12:57:58.260: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.903979ms)
Feb  9 12:57:58.267: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.432152ms)
Feb  9 12:57:58.275: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.541862ms)
Feb  9 12:57:58.281: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.994821ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:57:58.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-dg2md" for this suite.
Feb  9 12:58:04.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:58:04.443: INFO: namespace: e2e-tests-proxy-dg2md, resource: bindings, ignored listing per whitelist
Feb  9 12:58:04.786: INFO: namespace e2e-tests-proxy-dg2md deletion completed in 6.498958599s

• [SLOW TEST:6.899 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
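
Each of the 20 iterations above issues a GET against the node's proxy subresource, which the apiserver forwards to the kubelet's /logs endpoint. A small sketch of how that path is formed (node name taken from the log):

package main

import "fmt"

func main() {
	// Path requested by the test; the apiserver proxies it to the kubelet on the node.
	const node = "hunter-server-hu5at5svl7ps"
	path := fmt.Sprintf("/api/v1/nodes/%s/proxy/logs/", node)
	fmt.Println(path)
}

The same endpoint can be read by hand with kubectl get --raw /api/v1/nodes/<node>/proxy/logs/.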
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:58:04.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  9 12:58:05.011: INFO: Waiting up to 5m0s for pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005" in namespace "e2e-tests-emptydir-86kbx" to be "success or failure"
Feb  9 12:58:05.172: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 160.671109ms
Feb  9 12:58:07.188: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176558001s
Feb  9 12:58:09.210: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198523345s
Feb  9 12:58:12.015: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.004192585s
Feb  9 12:58:14.153: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.142227258s
Feb  9 12:58:16.180: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.168482791s
Feb  9 12:58:18.224: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.212867136s
STEP: Saw pod success
Feb  9 12:58:18.224: INFO: Pod "pod-cfd031c8-4b3b-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 12:58:18.258: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cfd031c8-4b3b-11ea-aa78-0242ac110005 container test-container: 
STEP: delete the pod
Feb  9 12:58:18.503: INFO: Waiting for pod pod-cfd031c8-4b3b-11ea-aa78-0242ac110005 to disappear
Feb  9 12:58:18.807: INFO: Pod pod-cfd031c8-4b3b-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:58:18.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-86kbx" for this suite.
Feb  9 12:58:26.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:58:27.191: INFO: namespace: e2e-tests-emptydir-86kbx, resource: bindings, ignored listing per whitelist
Feb  9 12:58:27.191: INFO: namespace e2e-tests-emptydir-86kbx deletion completed in 8.334065636s

• [SLOW TEST:22.405 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:58:27.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-dd249677-4b3b-11ea-aa78-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-dd249677-4b3b-11ea-aa78-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:58:39.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-q8vh4" for this suite.
Feb  9 12:59:06.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 12:59:06.168: INFO: namespace: e2e-tests-configmap-q8vh4, resource: bindings, ignored listing per whitelist
Feb  9 12:59:06.253: INFO: namespace e2e-tests-configmap-q8vh4 deletion completed in 26.39194818s

• [SLOW TEST:39.061 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
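
The ConfigMap volume test creates a ConfigMap, mounts it (without subPath) into a running pod, updates the ConfigMap and, as the "waiting to observe update in volume" step shows, waits for the kubelet to rewrite the projected files. A minimal sketch of the pair of objects involved; names, data and the polling command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap mounted as a whole volume: updates to it are eventually reflected
	// in the files the container reads, without restarting the pod.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}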
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 12:59:06.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  9 12:59:37.369: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:37.369: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:37.499620       8 log.go:172] (0xc0018064d0) (0xc0014cd360) Create stream
I0209 12:59:37.500048       8 log.go:172] (0xc0018064d0) (0xc0014cd360) Stream added, broadcasting: 1
I0209 12:59:37.527893       8 log.go:172] (0xc0018064d0) Reply frame received for 1
I0209 12:59:37.528127       8 log.go:172] (0xc0018064d0) (0xc0014cd400) Create stream
I0209 12:59:37.528162       8 log.go:172] (0xc0018064d0) (0xc0014cd400) Stream added, broadcasting: 3
I0209 12:59:37.530570       8 log.go:172] (0xc0018064d0) Reply frame received for 3
I0209 12:59:37.530659       8 log.go:172] (0xc0018064d0) (0xc00074d400) Create stream
I0209 12:59:37.530683       8 log.go:172] (0xc0018064d0) (0xc00074d400) Stream added, broadcasting: 5
I0209 12:59:37.532349       8 log.go:172] (0xc0018064d0) Reply frame received for 5
I0209 12:59:37.983504       8 log.go:172] (0xc0018064d0) Data frame received for 3
I0209 12:59:37.983596       8 log.go:172] (0xc0014cd400) (3) Data frame handling
I0209 12:59:37.983635       8 log.go:172] (0xc0014cd400) (3) Data frame sent
I0209 12:59:38.133687       8 log.go:172] (0xc0018064d0) Data frame received for 1
I0209 12:59:38.133921       8 log.go:172] (0xc0014cd360) (1) Data frame handling
I0209 12:59:38.133977       8 log.go:172] (0xc0014cd360) (1) Data frame sent
I0209 12:59:38.134073       8 log.go:172] (0xc0018064d0) (0xc00074d400) Stream removed, broadcasting: 5
I0209 12:59:38.134269       8 log.go:172] (0xc0018064d0) (0xc0014cd400) Stream removed, broadcasting: 3
I0209 12:59:38.134389       8 log.go:172] (0xc0018064d0) (0xc0014cd360) Stream removed, broadcasting: 1
I0209 12:59:38.134430       8 log.go:172] (0xc0018064d0) Go away received
I0209 12:59:38.135132       8 log.go:172] (0xc0018064d0) (0xc0014cd360) Stream removed, broadcasting: 1
I0209 12:59:38.135153       8 log.go:172] (0xc0018064d0) (0xc0014cd400) Stream removed, broadcasting: 3
I0209 12:59:38.135173       8 log.go:172] (0xc0018064d0) (0xc00074d400) Stream removed, broadcasting: 5
Feb  9 12:59:38.135: INFO: Exec stderr: ""
Feb  9 12:59:38.135: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:38.136: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:38.223875       8 log.go:172] (0xc001806b00) (0xc0014cd7c0) Create stream
I0209 12:59:38.224091       8 log.go:172] (0xc001806b00) (0xc0014cd7c0) Stream added, broadcasting: 1
I0209 12:59:38.230581       8 log.go:172] (0xc001806b00) Reply frame received for 1
I0209 12:59:38.230637       8 log.go:172] (0xc001806b00) (0xc000b31900) Create stream
I0209 12:59:38.230653       8 log.go:172] (0xc001806b00) (0xc000b31900) Stream added, broadcasting: 3
I0209 12:59:38.231730       8 log.go:172] (0xc001806b00) Reply frame received for 3
I0209 12:59:38.231773       8 log.go:172] (0xc001806b00) (0xc00074d4a0) Create stream
I0209 12:59:38.231785       8 log.go:172] (0xc001806b00) (0xc00074d4a0) Stream added, broadcasting: 5
I0209 12:59:38.232673       8 log.go:172] (0xc001806b00) Reply frame received for 5
I0209 12:59:38.368466       8 log.go:172] (0xc001806b00) Data frame received for 3
I0209 12:59:38.368610       8 log.go:172] (0xc000b31900) (3) Data frame handling
I0209 12:59:38.368641       8 log.go:172] (0xc000b31900) (3) Data frame sent
I0209 12:59:38.539000       8 log.go:172] (0xc001806b00) Data frame received for 1
I0209 12:59:38.539167       8 log.go:172] (0xc001806b00) (0xc000b31900) Stream removed, broadcasting: 3
I0209 12:59:38.539364       8 log.go:172] (0xc0014cd7c0) (1) Data frame handling
I0209 12:59:38.539423       8 log.go:172] (0xc0014cd7c0) (1) Data frame sent
I0209 12:59:38.539634       8 log.go:172] (0xc001806b00) (0xc00074d4a0) Stream removed, broadcasting: 5
I0209 12:59:38.540137       8 log.go:172] (0xc001806b00) (0xc0014cd7c0) Stream removed, broadcasting: 1
I0209 12:59:38.540232       8 log.go:172] (0xc001806b00) Go away received
I0209 12:59:38.540687       8 log.go:172] (0xc001806b00) (0xc0014cd7c0) Stream removed, broadcasting: 1
I0209 12:59:38.540734       8 log.go:172] (0xc001806b00) (0xc000b31900) Stream removed, broadcasting: 3
I0209 12:59:38.540764       8 log.go:172] (0xc001806b00) (0xc00074d4a0) Stream removed, broadcasting: 5
Feb  9 12:59:38.540: INFO: Exec stderr: ""
Feb  9 12:59:38.541: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:38.541: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:38.688178       8 log.go:172] (0xc0009fe000) (0xc000b31e00) Create stream
I0209 12:59:38.688387       8 log.go:172] (0xc0009fe000) (0xc000b31e00) Stream added, broadcasting: 1
I0209 12:59:38.700418       8 log.go:172] (0xc0009fe000) Reply frame received for 1
I0209 12:59:38.700486       8 log.go:172] (0xc0009fe000) (0xc00074d540) Create stream
I0209 12:59:38.700503       8 log.go:172] (0xc0009fe000) (0xc00074d540) Stream added, broadcasting: 3
I0209 12:59:38.701604       8 log.go:172] (0xc0009fe000) Reply frame received for 3
I0209 12:59:38.701641       8 log.go:172] (0xc0009fe000) (0xc0014cd860) Create stream
I0209 12:59:38.701656       8 log.go:172] (0xc0009fe000) (0xc0014cd860) Stream added, broadcasting: 5
I0209 12:59:38.710136       8 log.go:172] (0xc0009fe000) Reply frame received for 5
I0209 12:59:38.910701       8 log.go:172] (0xc0009fe000) Data frame received for 3
I0209 12:59:38.910839       8 log.go:172] (0xc00074d540) (3) Data frame handling
I0209 12:59:38.910862       8 log.go:172] (0xc00074d540) (3) Data frame sent
I0209 12:59:39.060198       8 log.go:172] (0xc0009fe000) Data frame received for 1
I0209 12:59:39.060362       8 log.go:172] (0xc000b31e00) (1) Data frame handling
I0209 12:59:39.060418       8 log.go:172] (0xc000b31e00) (1) Data frame sent
I0209 12:59:39.061486       8 log.go:172] (0xc0009fe000) (0xc000b31e00) Stream removed, broadcasting: 1
I0209 12:59:39.062053       8 log.go:172] (0xc0009fe000) (0xc00074d540) Stream removed, broadcasting: 3
I0209 12:59:39.062839       8 log.go:172] (0xc0009fe000) (0xc0014cd860) Stream removed, broadcasting: 5
I0209 12:59:39.062914       8 log.go:172] (0xc0009fe000) (0xc000b31e00) Stream removed, broadcasting: 1
I0209 12:59:39.062944       8 log.go:172] (0xc0009fe000) (0xc00074d540) Stream removed, broadcasting: 3
I0209 12:59:39.063007       8 log.go:172] (0xc0009fe000) (0xc0014cd860) Stream removed, broadcasting: 5
Feb  9 12:59:39.063: INFO: Exec stderr: ""
Feb  9 12:59:39.063: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:39.063: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:39.209240       8 log.go:172] (0xc0009fe4d0) (0xc000cd6280) Create stream
I0209 12:59:39.209527       8 log.go:172] (0xc0009fe4d0) (0xc000cd6280) Stream added, broadcasting: 1
I0209 12:59:39.220651       8 log.go:172] (0xc0009fe4d0) Reply frame received for 1
I0209 12:59:39.220838       8 log.go:172] (0xc0009fe4d0) (0xc000648460) Create stream
I0209 12:59:39.220860       8 log.go:172] (0xc0009fe4d0) (0xc000648460) Stream added, broadcasting: 3
I0209 12:59:39.222053       8 log.go:172] (0xc0009fe4d0) Reply frame received for 3
I0209 12:59:39.222102       8 log.go:172] (0xc0009fe4d0) (0xc000cd6320) Create stream
I0209 12:59:39.222132       8 log.go:172] (0xc0009fe4d0) (0xc000cd6320) Stream added, broadcasting: 5
I0209 12:59:39.224485       8 log.go:172] (0xc0009fe4d0) Reply frame received for 5
I0209 12:59:39.349595       8 log.go:172] (0xc0009fe4d0) Data frame received for 3
I0209 12:59:39.349686       8 log.go:172] (0xc000648460) (3) Data frame handling
I0209 12:59:39.349722       8 log.go:172] (0xc000648460) (3) Data frame sent
I0209 12:59:39.454938       8 log.go:172] (0xc0009fe4d0) (0xc000cd6320) Stream removed, broadcasting: 5
I0209 12:59:39.455075       8 log.go:172] (0xc0009fe4d0) Data frame received for 1
I0209 12:59:39.455105       8 log.go:172] (0xc000cd6280) (1) Data frame handling
I0209 12:59:39.455143       8 log.go:172] (0xc000cd6280) (1) Data frame sent
I0209 12:59:39.455167       8 log.go:172] (0xc0009fe4d0) (0xc000cd6280) Stream removed, broadcasting: 1
I0209 12:59:39.455234       8 log.go:172] (0xc0009fe4d0) (0xc000648460) Stream removed, broadcasting: 3
I0209 12:59:39.455273       8 log.go:172] (0xc0009fe4d0) Go away received
I0209 12:59:39.455569       8 log.go:172] (0xc0009fe4d0) (0xc000cd6280) Stream removed, broadcasting: 1
I0209 12:59:39.455592       8 log.go:172] (0xc0009fe4d0) (0xc000648460) Stream removed, broadcasting: 3
I0209 12:59:39.455598       8 log.go:172] (0xc0009fe4d0) (0xc000cd6320) Stream removed, broadcasting: 5
Feb  9 12:59:39.455: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  9 12:59:39.455: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:39.455: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:39.513000       8 log.go:172] (0xc0009fe9a0) (0xc000cd6780) Create stream
I0209 12:59:39.513219       8 log.go:172] (0xc0009fe9a0) (0xc000cd6780) Stream added, broadcasting: 1
I0209 12:59:39.518201       8 log.go:172] (0xc0009fe9a0) Reply frame received for 1
I0209 12:59:39.518259       8 log.go:172] (0xc0009fe9a0) (0xc001f25ea0) Create stream
I0209 12:59:39.518271       8 log.go:172] (0xc0009fe9a0) (0xc001f25ea0) Stream added, broadcasting: 3
I0209 12:59:39.519156       8 log.go:172] (0xc0009fe9a0) Reply frame received for 3
I0209 12:59:39.519173       8 log.go:172] (0xc0009fe9a0) (0xc000cd6820) Create stream
I0209 12:59:39.519180       8 log.go:172] (0xc0009fe9a0) (0xc000cd6820) Stream added, broadcasting: 5
I0209 12:59:39.520211       8 log.go:172] (0xc0009fe9a0) Reply frame received for 5
I0209 12:59:39.604689       8 log.go:172] (0xc0009fe9a0) Data frame received for 3
I0209 12:59:39.604808       8 log.go:172] (0xc001f25ea0) (3) Data frame handling
I0209 12:59:39.604838       8 log.go:172] (0xc001f25ea0) (3) Data frame sent
I0209 12:59:39.744814       8 log.go:172] (0xc0009fe9a0) (0xc001f25ea0) Stream removed, broadcasting: 3
I0209 12:59:39.745075       8 log.go:172] (0xc0009fe9a0) Data frame received for 1
I0209 12:59:39.745116       8 log.go:172] (0xc000cd6780) (1) Data frame handling
I0209 12:59:39.745218       8 log.go:172] (0xc000cd6780) (1) Data frame sent
I0209 12:59:39.745259       8 log.go:172] (0xc0009fe9a0) (0xc000cd6780) Stream removed, broadcasting: 1
I0209 12:59:39.745301       8 log.go:172] (0xc0009fe9a0) (0xc000cd6820) Stream removed, broadcasting: 5
I0209 12:59:39.745394       8 log.go:172] (0xc0009fe9a0) Go away received
I0209 12:59:39.745715       8 log.go:172] (0xc0009fe9a0) (0xc000cd6780) Stream removed, broadcasting: 1
I0209 12:59:39.745743       8 log.go:172] (0xc0009fe9a0) (0xc001f25ea0) Stream removed, broadcasting: 3
I0209 12:59:39.745764       8 log.go:172] (0xc0009fe9a0) (0xc000cd6820) Stream removed, broadcasting: 5
Feb  9 12:59:39.745: INFO: Exec stderr: ""
Feb  9 12:59:39.745: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:39.746: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:39.834024       8 log.go:172] (0xc0000eb6b0) (0xc00074d7c0) Create stream
I0209 12:59:39.834154       8 log.go:172] (0xc0000eb6b0) (0xc00074d7c0) Stream added, broadcasting: 1
I0209 12:59:39.838992       8 log.go:172] (0xc0000eb6b0) Reply frame received for 1
I0209 12:59:39.839025       8 log.go:172] (0xc0000eb6b0) (0xc0006485a0) Create stream
I0209 12:59:39.839032       8 log.go:172] (0xc0000eb6b0) (0xc0006485a0) Stream added, broadcasting: 3
I0209 12:59:39.840929       8 log.go:172] (0xc0000eb6b0) Reply frame received for 3
I0209 12:59:39.840954       8 log.go:172] (0xc0000eb6b0) (0xc0014cd900) Create stream
I0209 12:59:39.840963       8 log.go:172] (0xc0000eb6b0) (0xc0014cd900) Stream added, broadcasting: 5
I0209 12:59:39.841667       8 log.go:172] (0xc0000eb6b0) Reply frame received for 5
I0209 12:59:39.995293       8 log.go:172] (0xc0000eb6b0) Data frame received for 3
I0209 12:59:39.995417       8 log.go:172] (0xc0006485a0) (3) Data frame handling
I0209 12:59:39.995440       8 log.go:172] (0xc0006485a0) (3) Data frame sent
I0209 12:59:40.116913       8 log.go:172] (0xc0000eb6b0) (0xc0006485a0) Stream removed, broadcasting: 3
I0209 12:59:40.117039       8 log.go:172] (0xc0000eb6b0) Data frame received for 1
I0209 12:59:40.117069       8 log.go:172] (0xc00074d7c0) (1) Data frame handling
I0209 12:59:40.117090       8 log.go:172] (0xc00074d7c0) (1) Data frame sent
I0209 12:59:40.117113       8 log.go:172] (0xc0000eb6b0) (0xc0014cd900) Stream removed, broadcasting: 5
I0209 12:59:40.117226       8 log.go:172] (0xc0000eb6b0) (0xc00074d7c0) Stream removed, broadcasting: 1
I0209 12:59:40.117279       8 log.go:172] (0xc0000eb6b0) Go away received
I0209 12:59:40.117635       8 log.go:172] (0xc0000eb6b0) (0xc00074d7c0) Stream removed, broadcasting: 1
I0209 12:59:40.117674       8 log.go:172] (0xc0000eb6b0) (0xc0006485a0) Stream removed, broadcasting: 3
I0209 12:59:40.117692       8 log.go:172] (0xc0000eb6b0) (0xc0014cd900) Stream removed, broadcasting: 5
Feb  9 12:59:40.117: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  9 12:59:40.117: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:40.117: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:40.181604       8 log.go:172] (0xc0016822c0) (0xc000648c80) Create stream
I0209 12:59:40.181747       8 log.go:172] (0xc0016822c0) (0xc000648c80) Stream added, broadcasting: 1
I0209 12:59:40.190215       8 log.go:172] (0xc0016822c0) Reply frame received for 1
I0209 12:59:40.190312       8 log.go:172] (0xc0016822c0) (0xc001f25f40) Create stream
I0209 12:59:40.190325       8 log.go:172] (0xc0016822c0) (0xc001f25f40) Stream added, broadcasting: 3
I0209 12:59:40.191837       8 log.go:172] (0xc0016822c0) Reply frame received for 3
I0209 12:59:40.191939       8 log.go:172] (0xc0016822c0) (0xc0003860a0) Create stream
I0209 12:59:40.191955       8 log.go:172] (0xc0016822c0) (0xc0003860a0) Stream added, broadcasting: 5
I0209 12:59:40.195158       8 log.go:172] (0xc0016822c0) Reply frame received for 5
I0209 12:59:40.292029       8 log.go:172] (0xc0016822c0) Data frame received for 3
I0209 12:59:40.292112       8 log.go:172] (0xc001f25f40) (3) Data frame handling
I0209 12:59:40.292150       8 log.go:172] (0xc001f25f40) (3) Data frame sent
I0209 12:59:40.402030       8 log.go:172] (0xc0016822c0) (0xc001f25f40) Stream removed, broadcasting: 3
I0209 12:59:40.402499       8 log.go:172] (0xc0016822c0) Data frame received for 1
I0209 12:59:40.402620       8 log.go:172] (0xc0016822c0) (0xc0003860a0) Stream removed, broadcasting: 5
I0209 12:59:40.402917       8 log.go:172] (0xc000648c80) (1) Data frame handling
I0209 12:59:40.402965       8 log.go:172] (0xc000648c80) (1) Data frame sent
I0209 12:59:40.403001       8 log.go:172] (0xc0016822c0) (0xc000648c80) Stream removed, broadcasting: 1
I0209 12:59:40.403070       8 log.go:172] (0xc0016822c0) Go away received
I0209 12:59:40.403712       8 log.go:172] (0xc0016822c0) (0xc000648c80) Stream removed, broadcasting: 1
I0209 12:59:40.403745       8 log.go:172] (0xc0016822c0) (0xc001f25f40) Stream removed, broadcasting: 3
I0209 12:59:40.403777       8 log.go:172] (0xc0016822c0) (0xc0003860a0) Stream removed, broadcasting: 5
Feb  9 12:59:40.403: INFO: Exec stderr: ""
Feb  9 12:59:40.404: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:40.404: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:40.466098       8 log.go:172] (0xc001682790) (0xc000649400) Create stream
I0209 12:59:40.466223       8 log.go:172] (0xc001682790) (0xc000649400) Stream added, broadcasting: 1
I0209 12:59:40.476415       8 log.go:172] (0xc001682790) Reply frame received for 1
I0209 12:59:40.476556       8 log.go:172] (0xc001682790) (0xc0014cd9a0) Create stream
I0209 12:59:40.476573       8 log.go:172] (0xc001682790) (0xc0014cd9a0) Stream added, broadcasting: 3
I0209 12:59:40.478580       8 log.go:172] (0xc001682790) Reply frame received for 3
I0209 12:59:40.478637       8 log.go:172] (0xc001682790) (0xc00074d9a0) Create stream
I0209 12:59:40.478660       8 log.go:172] (0xc001682790) (0xc00074d9a0) Stream added, broadcasting: 5
I0209 12:59:40.480323       8 log.go:172] (0xc001682790) Reply frame received for 5
I0209 12:59:40.663179       8 log.go:172] (0xc001682790) Data frame received for 3
I0209 12:59:40.663308       8 log.go:172] (0xc0014cd9a0) (3) Data frame handling
I0209 12:59:40.663359       8 log.go:172] (0xc0014cd9a0) (3) Data frame sent
I0209 12:59:40.766283       8 log.go:172] (0xc001682790) (0xc0014cd9a0) Stream removed, broadcasting: 3
I0209 12:59:40.766412       8 log.go:172] (0xc001682790) Data frame received for 1
I0209 12:59:40.766444       8 log.go:172] (0xc001682790) (0xc00074d9a0) Stream removed, broadcasting: 5
I0209 12:59:40.766492       8 log.go:172] (0xc000649400) (1) Data frame handling
I0209 12:59:40.766516       8 log.go:172] (0xc000649400) (1) Data frame sent
I0209 12:59:40.766524       8 log.go:172] (0xc001682790) (0xc000649400) Stream removed, broadcasting: 1
I0209 12:59:40.766533       8 log.go:172] (0xc001682790) Go away received
I0209 12:59:40.767017       8 log.go:172] (0xc001682790) (0xc000649400) Stream removed, broadcasting: 1
I0209 12:59:40.767043       8 log.go:172] (0xc001682790) (0xc0014cd9a0) Stream removed, broadcasting: 3
I0209 12:59:40.767053       8 log.go:172] (0xc001682790) (0xc00074d9a0) Stream removed, broadcasting: 5
Feb  9 12:59:40.767: INFO: Exec stderr: ""
Feb  9 12:59:40.767: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:40.767: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:40.832483       8 log.go:172] (0xc001d722c0) (0xc0003a7540) Create stream
I0209 12:59:40.832590       8 log.go:172] (0xc001d722c0) (0xc0003a7540) Stream added, broadcasting: 1
I0209 12:59:40.838024       8 log.go:172] (0xc001d722c0) Reply frame received for 1
I0209 12:59:40.838105       8 log.go:172] (0xc001d722c0) (0xc001f32320) Create stream
I0209 12:59:40.838124       8 log.go:172] (0xc001d722c0) (0xc001f32320) Stream added, broadcasting: 3
I0209 12:59:40.839621       8 log.go:172] (0xc001d722c0) Reply frame received for 3
I0209 12:59:40.839709       8 log.go:172] (0xc001d722c0) (0xc00074dae0) Create stream
I0209 12:59:40.839739       8 log.go:172] (0xc001d722c0) (0xc00074dae0) Stream added, broadcasting: 5
I0209 12:59:40.840742       8 log.go:172] (0xc001d722c0) Reply frame received for 5
I0209 12:59:40.944421       8 log.go:172] (0xc001d722c0) Data frame received for 3
I0209 12:59:40.944561       8 log.go:172] (0xc001f32320) (3) Data frame handling
I0209 12:59:40.944609       8 log.go:172] (0xc001f32320) (3) Data frame sent
I0209 12:59:41.054578       8 log.go:172] (0xc001d722c0) Data frame received for 1
I0209 12:59:41.054700       8 log.go:172] (0xc0003a7540) (1) Data frame handling
I0209 12:59:41.054745       8 log.go:172] (0xc0003a7540) (1) Data frame sent
I0209 12:59:41.055980       8 log.go:172] (0xc001d722c0) (0xc0003a7540) Stream removed, broadcasting: 1
I0209 12:59:41.056104       8 log.go:172] (0xc001d722c0) (0xc001f32320) Stream removed, broadcasting: 3
I0209 12:59:41.056296       8 log.go:172] (0xc001d722c0) (0xc00074dae0) Stream removed, broadcasting: 5
I0209 12:59:41.056444       8 log.go:172] (0xc001d722c0) Go away received
I0209 12:59:41.056584       8 log.go:172] (0xc001d722c0) (0xc0003a7540) Stream removed, broadcasting: 1
I0209 12:59:41.056620       8 log.go:172] (0xc001d722c0) (0xc001f32320) Stream removed, broadcasting: 3
I0209 12:59:41.056633       8 log.go:172] (0xc001d722c0) (0xc00074dae0) Stream removed, broadcasting: 5
Feb  9 12:59:41.056: INFO: Exec stderr: ""
Feb  9 12:59:41.056: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w9cxh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 12:59:41.056: INFO: >>> kubeConfig: /root/.kube/config
I0209 12:59:41.125682       8 log.go:172] (0xc001d72790) (0xc0005f8a00) Create stream
I0209 12:59:41.125870       8 log.go:172] (0xc001d72790) (0xc0005f8a00) Stream added, broadcasting: 1
I0209 12:59:41.133189       8 log.go:172] (0xc001d72790) Reply frame received for 1
I0209 12:59:41.133243       8 log.go:172] (0xc001d72790) (0xc0014cda40) Create stream
I0209 12:59:41.133263       8 log.go:172] (0xc001d72790) (0xc0014cda40) Stream added, broadcasting: 3
I0209 12:59:41.134542       8 log.go:172] (0xc001d72790) Reply frame received for 3
I0209 12:59:41.134601       8 log.go:172] (0xc001d72790) (0xc0014cdae0) Create stream
I0209 12:59:41.134613       8 log.go:172] (0xc001d72790) (0xc0014cdae0) Stream added, broadcasting: 5
I0209 12:59:41.136019       8 log.go:172] (0xc001d72790) Reply frame received for 5
I0209 12:59:41.279128       8 log.go:172] (0xc001d72790) Data frame received for 3
I0209 12:59:41.279217       8 log.go:172] (0xc0014cda40) (3) Data frame handling
I0209 12:59:41.279261       8 log.go:172] (0xc0014cda40) (3) Data frame sent
I0209 12:59:41.415650       8 log.go:172] (0xc001d72790) Data frame received for 1
I0209 12:59:41.416127       8 log.go:172] (0xc001d72790) (0xc0014cdae0) Stream removed, broadcasting: 5
I0209 12:59:41.416308       8 log.go:172] (0xc0005f8a00) (1) Data frame handling
I0209 12:59:41.416356       8 log.go:172] (0xc0005f8a00) (1) Data frame sent
I0209 12:59:41.416450       8 log.go:172] (0xc001d72790) (0xc0014cda40) Stream removed, broadcasting: 3
I0209 12:59:41.416555       8 log.go:172] (0xc001d72790) (0xc0005f8a00) Stream removed, broadcasting: 1
I0209 12:59:41.416623       8 log.go:172] (0xc001d72790) Go away received
I0209 12:59:41.416975       8 log.go:172] (0xc001d72790) (0xc0005f8a00) Stream removed, broadcasting: 1
I0209 12:59:41.416998       8 log.go:172] (0xc001d72790) (0xc0014cda40) Stream removed, broadcasting: 3
I0209 12:59:41.417013       8 log.go:172] (0xc001d72790) (0xc0014cdae0) Stream removed, broadcasting: 5
Feb  9 12:59:41.417: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 12:59:41.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-w9cxh" for this suite.
Feb  9 13:00:37.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:00:37.637: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-w9cxh, resource: bindings, ignored listing per whitelist
Feb  9 13:00:37.711: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-w9cxh deletion completed in 56.282220027s

• [SLOW TEST:91.458 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
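
The KubeletManagedEtcHosts test compares /etc/hosts across three situations: an ordinary container (kubelet-managed), a container that mounts its own /etc/hosts (not managed), and a hostNetwork pod (not managed). A minimal sketch of the two pods; the pod and container names come from the log, while the images, commands and the hostPath mount for busybox-3 are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sleep := []string{"sleep", "3600"}
	// Ordinary pod: busybox-1 and busybox-2 get the kubelet-managed /etc/hosts,
	// busybox-3 mounts its own file over /etc/hosts so the kubelet leaves it alone.
	testPod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: sleep},
				{Name: "busybox-2", Image: "busybox", Command: sleep},
				{
					Name: "busybox-3", Image: "busybox", Command: sleep,
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
		},
	}
	// hostNetwork pod: the containers see the node's /etc/hosts, so it is not managed.
	hostNetPod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: sleep},
				{Name: "busybox-2", Image: "busybox", Command: sleep},
			},
		},
	}
	for _, p := range []corev1.Pod{testPod, hostNetPod} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}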
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:00:37.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9rj5b
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  9 13:00:38.083: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  9 13:01:16.392: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9rj5b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:01:16.392: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:01:16.496180       8 log.go:172] (0xc0018060b0) (0xc001e4e3c0) Create stream
I0209 13:01:16.496479       8 log.go:172] (0xc0018060b0) (0xc001e4e3c0) Stream added, broadcasting: 1
I0209 13:01:16.507216       8 log.go:172] (0xc0018060b0) Reply frame received for 1
I0209 13:01:16.507354       8 log.go:172] (0xc0018060b0) (0xc001a86320) Create stream
I0209 13:01:16.507390       8 log.go:172] (0xc0018060b0) (0xc001a86320) Stream added, broadcasting: 3
I0209 13:01:16.509920       8 log.go:172] (0xc0018060b0) Reply frame received for 3
I0209 13:01:16.509994       8 log.go:172] (0xc0018060b0) (0xc001680500) Create stream
I0209 13:01:16.510012       8 log.go:172] (0xc0018060b0) (0xc001680500) Stream added, broadcasting: 5
I0209 13:01:16.511569       8 log.go:172] (0xc0018060b0) Reply frame received for 5
I0209 13:01:16.713597       8 log.go:172] (0xc0018060b0) Data frame received for 3
I0209 13:01:16.713779       8 log.go:172] (0xc001a86320) (3) Data frame handling
I0209 13:01:16.713892       8 log.go:172] (0xc001a86320) (3) Data frame sent
I0209 13:01:16.915997       8 log.go:172] (0xc0018060b0) (0xc001680500) Stream removed, broadcasting: 5
I0209 13:01:16.916834       8 log.go:172] (0xc0018060b0) Data frame received for 1
I0209 13:01:16.917344       8 log.go:172] (0xc0018060b0) (0xc001a86320) Stream removed, broadcasting: 3
I0209 13:01:16.917777       8 log.go:172] (0xc001e4e3c0) (1) Data frame handling
I0209 13:01:16.917944       8 log.go:172] (0xc001e4e3c0) (1) Data frame sent
I0209 13:01:16.918030       8 log.go:172] (0xc0018060b0) (0xc001e4e3c0) Stream removed, broadcasting: 1
I0209 13:01:16.918102       8 log.go:172] (0xc0018060b0) Go away received
I0209 13:01:16.918651       8 log.go:172] (0xc0018060b0) (0xc001e4e3c0) Stream removed, broadcasting: 1
I0209 13:01:16.918692       8 log.go:172] (0xc0018060b0) (0xc001a86320) Stream removed, broadcasting: 3
I0209 13:01:16.918713       8 log.go:172] (0xc0018060b0) (0xc001680500) Stream removed, broadcasting: 5
Feb  9 13:01:16.919: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:01:16.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9rj5b" for this suite.
Feb  9 13:01:42.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:01:43.078: INFO: namespace: e2e-tests-pod-network-test-9rj5b, resource: bindings, ignored listing per whitelist
Feb  9 13:01:43.115: INFO: namespace e2e-tests-pod-network-test-9rj5b deletion completed in 26.177324258s

• [SLOW TEST:65.404 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
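
The networking check above has one pod curl another pod's IP over HTTP through the framework's netexec "dial" endpoint. A rough manual equivalent, assuming an nginx server pod and a busybox client pod (names and images are illustrative, not the images the suite uses):

kubectl run http-server --image=nginx --restart=Never
kubectl run http-client --image=busybox --restart=Never -- sleep 3600
# once both pods are Running, fetch the server by its pod IP from inside the client
SERVER_IP=$(kubectl get pod http-server -o jsonpath='{.status.podIP}')
kubectl exec http-client -- wget -qO- "http://${SERVER_IP}:80/"
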
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:01:43.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:01:43.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-vkmf2" to be "success or failure"
Feb  9 13:01:43.401: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.804595ms
Feb  9 13:01:45.417: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028889831s
Feb  9 13:01:47.438: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04970962s
Feb  9 13:01:49.447: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059131788s
Feb  9 13:01:51.466: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07750851s
Feb  9 13:01:53.557: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169008321s
STEP: Saw pod success
Feb  9 13:01:53.557: INFO: Pod "downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 13:01:53.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005 container client-container: 
STEP: delete the pod
Feb  9 13:01:53.758: INFO: Waiting for pod downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005 to disappear
Feb  9 13:01:53.925: INFO: Pod downwardapi-volume-51f99a57-4b3c-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:01:53.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vkmf2" for this suite.
Feb  9 13:02:00.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:02:00.168: INFO: namespace: e2e-tests-downward-api-vkmf2, resource: bindings, ignored listing per whitelist
Feb  9 13:02:00.284: INFO: namespace e2e-tests-downward-api-vkmf2 deletion completed in 6.333006406s

• [SLOW TEST:17.168 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
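
A hedged sketch of the kind of manifest the DefaultMode check exercises: a downward API volume whose projected files all get a non-default mode. The names, the busybox image, and the 0400 mode are illustrative; the conformance test builds its own pod spec.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400          # octal; applied to every file projected by this volume
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-defaultmode-demo   # inspect the listing once the pod has Succeeded
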
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:02:00.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5c33fc40-4b3c-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  9 13:02:00.563: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-pl5t5" to be "success or failure"
Feb  9 13:02:00.576: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.448412ms
Feb  9 13:02:02.619: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055515196s
Feb  9 13:02:04.630: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066423111s
Feb  9 13:02:07.010: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446620494s
Feb  9 13:02:09.078: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514200925s
Feb  9 13:02:11.290: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.726439277s
Feb  9 13:02:13.305: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.741004689s
STEP: Saw pod success
Feb  9 13:02:13.305: INFO: Pod "pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 13:02:13.312: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 13:02:14.708: INFO: Waiting for pod pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005 to disappear
Feb  9 13:02:14.832: INFO: Pod pod-projected-configmaps-5c35e040-4b3c-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:02:14.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pl5t5" for this suite.
Feb  9 13:02:21.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:02:21.169: INFO: namespace: e2e-tests-projected-pl5t5, resource: bindings, ignored listing per whitelist
Feb  9 13:02:21.192: INFO: namespace e2e-tests-projected-pl5t5 deletion completed in 6.335721912s

• [SLOW TEST:20.908 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
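
Roughly what a projected configMap volume looks like; the configMap name, key, and busybox image are placeholders rather than the suite's generated names.

kubectl create configmap projected-demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-demo-config
EOF
kubectl logs projected-configmap-demo       # should print value-1
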
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:02:21.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-68b2dffb-4b3c-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  9 13:02:21.616: INFO: Waiting up to 5m0s for pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005" in namespace "e2e-tests-secrets-s56dh" to be "success or failure"
Feb  9 13:02:21.630: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.879569ms
Feb  9 13:02:23.878: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261614211s
Feb  9 13:02:26.001: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384995881s
Feb  9 13:02:28.628: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.011500997s
Feb  9 13:02:30.642: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.025487109s
Feb  9 13:02:32.673: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.056489553s
Feb  9 13:02:34.724: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.108275633s
STEP: Saw pod success
Feb  9 13:02:34.725: INFO: Pod "pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 13:02:34.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb  9 13:02:35.035: INFO: Waiting for pod pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005 to disappear
Feb  9 13:02:35.056: INFO: Pod pod-secrets-68bd4e43-4b3c-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:02:35.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s56dh" for this suite.
Feb  9 13:02:41.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:02:41.342: INFO: namespace: e2e-tests-secrets-s56dh, resource: bindings, ignored listing per whitelist
Feb  9 13:02:41.358: INFO: namespace e2e-tests-secrets-s56dh deletion completed in 6.282915911s

• [SLOW TEST:20.165 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
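
The "with mappings" variant remaps a secret key to a different file name via items. A hedged sketch with made-up names:

kubectl create secret generic secret-map-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mappings-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1    # mapping: key data-1 is exposed under a different file name
EOF
kubectl logs secret-mappings-demo            # should print value-1
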
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:02:41.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-fpkb
STEP: Creating a pod to test atomic-volume-subpath
Feb  9 13:02:41.587: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fpkb" in namespace "e2e-tests-subpath-bkn69" to be "success or failure"
Feb  9 13:02:41.686: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 98.879017ms
Feb  9 13:02:43.726: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13827708s
Feb  9 13:02:45.747: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160113393s
Feb  9 13:02:48.710: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.123112429s
Feb  9 13:02:50.726: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.138647517s
Feb  9 13:02:52.839: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.25142968s
Feb  9 13:02:55.044: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.456584815s
Feb  9 13:02:57.060: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.47245301s
Feb  9 13:02:59.085: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.497386669s
Feb  9 13:03:01.100: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 19.512984295s
Feb  9 13:03:03.118: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 21.531137842s
Feb  9 13:03:05.140: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 23.552919119s
Feb  9 13:03:07.167: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 25.579836373s
Feb  9 13:03:09.192: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 27.604692434s
Feb  9 13:03:11.208: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 29.621009408s
Feb  9 13:03:13.229: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 31.641647347s
Feb  9 13:03:15.241: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Running", Reason="", readiness=false. Elapsed: 33.654210025s
Feb  9 13:03:17.260: INFO: Pod "pod-subpath-test-secret-fpkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.672588151s
STEP: Saw pod success
Feb  9 13:03:17.260: INFO: Pod "pod-subpath-test-secret-fpkb" satisfied condition "success or failure"
Feb  9 13:03:17.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-fpkb container test-container-subpath-secret-fpkb: 
STEP: delete the pod
Feb  9 13:03:17.444: INFO: Waiting for pod pod-subpath-test-secret-fpkb to disappear
Feb  9 13:03:17.472: INFO: Pod pod-subpath-test-secret-fpkb no longer exists
STEP: Deleting pod pod-subpath-test-secret-fpkb
Feb  9 13:03:17.473: INFO: Deleting pod "pod-subpath-test-secret-fpkb" in namespace "e2e-tests-subpath-bkn69"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:03:17.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-bkn69" for this suite.
Feb  9 13:03:23.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:03:23.865: INFO: namespace: e2e-tests-subpath-bkn69, resource: bindings, ignored listing per whitelist
Feb  9 13:03:23.960: INFO: namespace e2e-tests-subpath-bkn69 deletion completed in 6.462005392s

• [SLOW TEST:42.602 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
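
The atomic-writer subpath test mounts a single key of a secret volume through subPath while the volume contents are updated atomically underneath. A minimal sketch of that mount shape (names are placeholders, not the generated pod-subpath-test-secret-fpkb spec):

kubectl create secret generic subpath-demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo-secret
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-secret
    image: busybox
    command: ["sh", "-c", "cat /probe-volume/data-1"]
    volumeMounts:
    - name: subpath-vol
      mountPath: /probe-volume/data-1
      subPath: data-1            # mount only this key of the secret volume
  volumes:
  - name: subpath-vol
    secret:
      secretName: subpath-demo-secret
EOF
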
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:03:23.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb  9 13:03:24.193: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:03:42.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-5pnz4" for this suite.
Feb  9 13:03:51.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:03:51.206: INFO: namespace: e2e-tests-init-container-5pnz4, resource: bindings, ignored listing per whitelist
Feb  9 13:03:51.403: INFO: namespace e2e-tests-init-container-5pnz4 deletion completed in 8.500088075s

• [SLOW TEST:27.442 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
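
For reference, the shape of a RestartNever pod with init containers, which run to completion in order before the main container starts. A hedged example with arbitrary names and the busybox image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:                # run sequentially, each must succeed before the next starts
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "echo main done"]
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
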
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:03:51.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb  9 13:03:51.611: INFO: Waiting up to 5m0s for pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005" in namespace "e2e-tests-downward-api-67bpm" to be "success or failure"
Feb  9 13:03:51.627: INFO: Pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.576537ms
Feb  9 13:03:53.651: INFO: Pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039625655s
Feb  9 13:03:55.669: INFO: Pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057680855s
Feb  9 13:03:57.719: INFO: Pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107386776s
Feb  9 13:03:59.731: INFO: Pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119931736s
STEP: Saw pod success
Feb  9 13:03:59.731: INFO: Pod "downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 13:03:59.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb  9 13:03:59.809: INFO: Waiting for pod downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005 to disappear
Feb  9 13:03:59.912: INFO: Pod downward-api-9e6811c9-4b3c-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:03:59.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-67bpm" for this suite.
Feb  9 13:04:05.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:04:06.062: INFO: namespace: e2e-tests-downward-api-67bpm, resource: bindings, ignored listing per whitelist
Feb  9 13:04:06.179: INFO: namespace e2e-tests-downward-api-67bpm deletion completed in 6.249938499s

• [SLOW TEST:14.776 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
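
The host-IP check injects the node's IP into the container environment via the downward API. A minimal sketch (pod and env-var names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # IP of the node the pod was scheduled onto
EOF
kubectl logs downward-api-hostip-demo
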
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:04:06.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0209 13:04:08.626409       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 13:04:08.626: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:04:08.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cw782" for this suite.
Feb  9 13:04:14.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:04:14.905: INFO: namespace: e2e-tests-gc-cw782, resource: bindings, ignored listing per whitelist
Feb  9 13:04:15.015: INFO: namespace e2e-tests-gc-cw782 deletion completed in 6.357802822s

• [SLOW TEST:8.835 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
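
The garbage-collector check relies on owner references: the Deployment owns its ReplicaSet, which owns the Pods, so a non-orphaning delete of the Deployment eventually removes all of them (the "expected 0 ... got ..." lines above are just the test polling until that happens). A hedged kubectl reproduction; the deployment name and image are arbitrary, and on older clients orphaning is spelled --cascade=false rather than --cascade=orphan:

kubectl create deployment gc-demo --image=nginx
kubectl get replicaset -l app=gc-demo          # the Deployment-owned ReplicaSet
kubectl delete deployment gc-demo              # default delete does not orphan dependents
kubectl get replicaset,pods -l app=gc-demo     # ReplicaSet and Pods are garbage collected shortly after
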
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:04:15.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  9 13:04:15.239: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093129,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 13:04:15.239: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093129,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  9 13:04:25.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093141,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  9 13:04:25.287: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093141,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  9 13:04:35.320: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093155,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 13:04:35.321: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093155,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  9 13:04:45.383: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093168,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 13:04:45.384: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-a,UID:ac7ea66e-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093168,Generation:0,CreationTimestamp:2020-02-09 13:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  9 13:04:55.410: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-b,UID:c46e88ea-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093181,Generation:0,CreationTimestamp:2020-02-09 13:04:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 13:04:55.410: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-b,UID:c46e88ea-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093181,Generation:0,CreationTimestamp:2020-02-09 13:04:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  9 13:05:05.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-b,UID:c46e88ea-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093194,Generation:0,CreationTimestamp:2020-02-09 13:04:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 13:05:05.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-s4xp7,SelfLink:/api/v1/namespaces/e2e-tests-watch-s4xp7/configmaps/e2e-watch-test-configmap-b,UID:c46e88ea-4b3c-11ea-a994-fa163e34d433,ResourceVersion:21093194,Generation:0,CreationTimestamp:2020-02-09 13:04:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:05:15.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-s4xp7" for this suite.
Feb  9 13:05:21.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:05:21.596: INFO: namespace: e2e-tests-watch-s4xp7, resource: bindings, ignored listing per whitelist
Feb  9 13:05:21.629: INFO: namespace e2e-tests-watch-s4xp7 deletion completed in 6.172954991s

• [SLOW TEST:66.614 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
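
The watch test drives ADDED/MODIFIED/DELETED notifications through label-selector watches. A rough two-terminal equivalent with kubectl; the label key watch-demo and the configmap name are made up for the example:

# terminal 1: long-running watch restricted by a label selector
kubectl get configmaps -l watch-demo=A --watch

# terminal 2: generate the events the watcher should observe
kubectl create configmap watch-demo-a
kubectl label configmap watch-demo-a watch-demo=A                     # seen by the watcher as ADDED
kubectl patch configmap watch-demo-a -p '{"data":{"mutation":"1"}}'   # seen as MODIFIED
kubectl delete configmap watch-demo-a                                 # seen as DELETED
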
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:05:21.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-d4261135-4b3c-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb  9 13:05:21.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005" in namespace "e2e-tests-configmap-m79bf" to be "success or failure"
Feb  9 13:05:21.880: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.780049ms
Feb  9 13:05:24.596: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.732291666s
Feb  9 13:05:26.644: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.780526573s
Feb  9 13:05:28.667: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.803824704s
Feb  9 13:05:31.251: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.387278223s
Feb  9 13:05:33.275: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.412028575s
Feb  9 13:05:35.294: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.430431364s
STEP: Saw pod success
Feb  9 13:05:35.294: INFO: Pod "pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 13:05:35.572: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb  9 13:05:35.858: INFO: Waiting for pod pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005 to disappear
Feb  9 13:05:35.888: INFO: Pod pod-configmaps-d4305abb-4b3c-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:05:35.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m79bf" for this suite.
Feb  9 13:05:41.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:05:42.182: INFO: namespace: e2e-tests-configmap-m79bf, resource: bindings, ignored listing per whitelist
Feb  9 13:05:42.182: INFO: namespace e2e-tests-configmap-m79bf deletion completed in 6.283114203s

• [SLOW TEST:20.552 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
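
"Mappings and Item mode set" means each configMap key can be remapped to a path and given its own file mode. A hedged sketch with placeholder names:

kubectl create configmap configmap-mode-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mode-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400               # per-item mode, overriding any volume-level defaultMode
EOF
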
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:05:42.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  9 13:05:42.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb  9 13:05:42.411: INFO: stderr: ""
Feb  9 13:05:42.411: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb  9 13:05:42.421: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:05:42.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vcqds" for this suite.
Feb  9 13:05:48.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:05:48.979: INFO: namespace: e2e-tests-kubectl-vcqds, resource: bindings, ignored listing per whitelist
Feb  9 13:05:48.981: INFO: namespace e2e-tests-kubectl-vcqds deletion completed in 6.501117211s

S [SKIPPING] [6.799 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb  9 13:05:42.421: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
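
This entry was skipped by the framework's server-version gate, so the run only records the client version probe. For reference, the behaviour it would have covered is plain kubectl describe output for a replication controller and its pods; redis-master and the app=redis label are just example names, matching the rc used later in this run:

kubectl describe rc redis-master
kubectl describe pods -l app=redis
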
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:05:48.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e472abae-4b3c-11ea-aa78-0242ac110005
STEP: Creating a pod to test consume secrets
Feb  9 13:05:49.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005" in namespace "e2e-tests-projected-4rrh7" to be "success or failure"
Feb  9 13:05:49.143: INFO: Pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.559178ms
Feb  9 13:05:51.163: INFO: Pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040321772s
Feb  9 13:05:53.176: INFO: Pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053977287s
Feb  9 13:05:55.194: INFO: Pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07162072s
Feb  9 13:05:57.262: INFO: Pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139397038s
STEP: Saw pod success
Feb  9 13:05:57.262: INFO: Pod "pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005" satisfied condition "success or failure"
Feb  9 13:05:57.270: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 13:05:57.357: INFO: Waiting for pod pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005 to disappear
Feb  9 13:05:57.436: INFO: Pod pod-projected-secrets-e47382b0-4b3c-11ea-aa78-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:05:57.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4rrh7" for this suite.
Feb  9 13:06:03.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:06:03.530: INFO: namespace: e2e-tests-projected-4rrh7, resource: bindings, ignored listing per whitelist
Feb  9 13:06:03.671: INFO: namespace e2e-tests-projected-4rrh7 deletion completed in 6.226301617s

• [SLOW TEST:14.689 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
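
The projected-secret variant combines a secret source inside a projected volume with per-item path remapping and mode. A hedged sketch with placeholder names:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/remapped-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: remapped-data-1
            mode: 0400           # per-item mode inside the projected volume
EOF
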
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:06:03.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb  9 13:06:04.105: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  9 13:06:04.280: INFO: Number of nodes with available pods: 0
Feb  9 13:06:04.280: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  9 13:06:04.357: INFO: Number of nodes with available pods: 0
Feb  9 13:06:04.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:06.071: INFO: Number of nodes with available pods: 0
Feb  9 13:06:06.071: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:06.386: INFO: Number of nodes with available pods: 0
Feb  9 13:06:06.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:07.376: INFO: Number of nodes with available pods: 0
Feb  9 13:06:07.376: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:08.386: INFO: Number of nodes with available pods: 0
Feb  9 13:06:08.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:10.530: INFO: Number of nodes with available pods: 0
Feb  9 13:06:10.530: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:11.618: INFO: Number of nodes with available pods: 0
Feb  9 13:06:11.618: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:12.379: INFO: Number of nodes with available pods: 0
Feb  9 13:06:12.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:13.379: INFO: Number of nodes with available pods: 0
Feb  9 13:06:13.379: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:14.382: INFO: Number of nodes with available pods: 1
Feb  9 13:06:14.382: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  9 13:06:14.482: INFO: Number of nodes with available pods: 1
Feb  9 13:06:14.483: INFO: Number of running nodes: 0, number of available pods: 1
Feb  9 13:06:15.503: INFO: Number of nodes with available pods: 0
Feb  9 13:06:15.503: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  9 13:06:15.641: INFO: Number of nodes with available pods: 0
Feb  9 13:06:15.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:16.654: INFO: Number of nodes with available pods: 0
Feb  9 13:06:16.655: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:17.732: INFO: Number of nodes with available pods: 0
Feb  9 13:06:17.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:18.687: INFO: Number of nodes with available pods: 0
Feb  9 13:06:18.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:19.879: INFO: Number of nodes with available pods: 0
Feb  9 13:06:19.879: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:20.665: INFO: Number of nodes with available pods: 0
Feb  9 13:06:20.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:21.653: INFO: Number of nodes with available pods: 0
Feb  9 13:06:21.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:22.658: INFO: Number of nodes with available pods: 0
Feb  9 13:06:22.658: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:23.657: INFO: Number of nodes with available pods: 0
Feb  9 13:06:23.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:24.664: INFO: Number of nodes with available pods: 0
Feb  9 13:06:24.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:25.662: INFO: Number of nodes with available pods: 0
Feb  9 13:06:25.662: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:26.671: INFO: Number of nodes with available pods: 0
Feb  9 13:06:26.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:27.693: INFO: Number of nodes with available pods: 0
Feb  9 13:06:27.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:28.660: INFO: Number of nodes with available pods: 0
Feb  9 13:06:28.661: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:29.887: INFO: Number of nodes with available pods: 0
Feb  9 13:06:29.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:30.672: INFO: Number of nodes with available pods: 0
Feb  9 13:06:30.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:31.707: INFO: Number of nodes with available pods: 0
Feb  9 13:06:31.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:32.664: INFO: Number of nodes with available pods: 0
Feb  9 13:06:32.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:33.657: INFO: Number of nodes with available pods: 0
Feb  9 13:06:33.658: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:34.665: INFO: Number of nodes with available pods: 0
Feb  9 13:06:34.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:35.904: INFO: Number of nodes with available pods: 0
Feb  9 13:06:35.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:36.693: INFO: Number of nodes with available pods: 0
Feb  9 13:06:36.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:37.665: INFO: Number of nodes with available pods: 0
Feb  9 13:06:37.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:38.671: INFO: Number of nodes with available pods: 0
Feb  9 13:06:38.671: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:39.666: INFO: Number of nodes with available pods: 0
Feb  9 13:06:39.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:40.732: INFO: Number of nodes with available pods: 0
Feb  9 13:06:40.732: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:41.656: INFO: Number of nodes with available pods: 0
Feb  9 13:06:41.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:42.661: INFO: Number of nodes with available pods: 0
Feb  9 13:06:42.661: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:43.660: INFO: Number of nodes with available pods: 0
Feb  9 13:06:43.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:06:44.749: INFO: Number of nodes with available pods: 1
Feb  9 13:06:44.749: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6bk2t, will wait for the garbage collector to delete the pods
Feb  9 13:06:44.985: INFO: Deleting DaemonSet.extensions daemon-set took: 154.323638ms
Feb  9 13:06:45.286: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.513606ms
Feb  9 13:07:02.710: INFO: Number of nodes with available pods: 0
Feb  9 13:07:02.710: INFO: Number of running nodes: 0, number of available pods: 0
Feb  9 13:07:02.828: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6bk2t/daemonsets","resourceVersion":"21093446"},"items":null}

Feb  9 13:07:02.840: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6bk2t/pods","resourceVersion":"21093447"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:07:03.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6bk2t" for this suite.
Feb  9 13:07:11.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:07:11.293: INFO: namespace: e2e-tests-daemonsets-6bk2t, resource: bindings, ignored listing per whitelist
Feb  9 13:07:11.411: INFO: namespace e2e-tests-daemonsets-6bk2t deletion completed in 8.352164781s

• [SLOW TEST:67.739 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
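For readers reproducing the "complex daemon" scenario by hand: the spec constrains daemon pods to labelled nodes and then relabels the node so exactly one pod becomes available, which is what the polling above is waiting for. A rough sketch follows; the label key/value (color=blue), the nginx image, and the manifest file name are illustrative assumptions, not the exact objects the suite creates, and the node and namespace names are simply the ones from this run.

# Sketch only: a DaemonSet limited to nodes carrying a hypothetical color=blue label.
cat <<'EOF' > daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl --kubeconfig=/root/.kube/config create -f daemon-set.yaml --namespace=e2e-tests-daemonsets-6bk2t
# No daemon pods become available until the node carries the selector label:
kubectl --kubeconfig=/root/.kube/config label node hunter-server-hu5at5svl7ps color=blue
kubectl --kubeconfig=/root/.kube/config get pods --namespace=e2e-tests-daemonsets-6bk2t -o wide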
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:07:11.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb  9 13:07:11.693: INFO: namespace e2e-tests-kubectl-x8n92
Feb  9 13:07:11.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x8n92'
Feb  9 13:07:14.215: INFO: stderr: ""
Feb  9 13:07:14.215: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  9 13:07:16.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:16.199: INFO: Found 0 / 1
Feb  9 13:07:16.637: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:16.638: INFO: Found 0 / 1
Feb  9 13:07:17.668: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:17.668: INFO: Found 0 / 1
Feb  9 13:07:18.254: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:18.254: INFO: Found 0 / 1
Feb  9 13:07:19.229: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:19.229: INFO: Found 0 / 1
Feb  9 13:07:20.246: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:20.247: INFO: Found 0 / 1
Feb  9 13:07:21.311: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:21.311: INFO: Found 0 / 1
Feb  9 13:07:22.487: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:22.488: INFO: Found 0 / 1
Feb  9 13:07:23.699: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:23.700: INFO: Found 0 / 1
Feb  9 13:07:24.239: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:24.239: INFO: Found 0 / 1
Feb  9 13:07:25.232: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:25.232: INFO: Found 0 / 1
Feb  9 13:07:26.244: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:26.244: INFO: Found 1 / 1
Feb  9 13:07:26.244: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  9 13:07:26.252: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 13:07:26.252: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  9 13:07:26.252: INFO: wait on redis-master startup in e2e-tests-kubectl-x8n92 
Feb  9 13:07:26.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-px4zh redis-master --namespace=e2e-tests-kubectl-x8n92'
Feb  9 13:07:26.688: INFO: stderr: ""
Feb  9 13:07:26.689: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Feb 13:07:25.067 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Feb 13:07:25.068 # Server started, Redis version 3.2.12\n1:M 09 Feb 13:07:25.068 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Feb 13:07:25.068 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  9 13:07:26.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-x8n92'
Feb  9 13:07:26.978: INFO: stderr: ""
Feb  9 13:07:26.978: INFO: stdout: "service/rm2 exposed\n"
Feb  9 13:07:27.024: INFO: Service rm2 in namespace e2e-tests-kubectl-x8n92 found.
STEP: exposing service
Feb  9 13:07:29.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-x8n92'
Feb  9 13:07:29.349: INFO: stderr: ""
Feb  9 13:07:29.350: INFO: stdout: "service/rm3 exposed\n"
Feb  9 13:07:29.452: INFO: Service rm3 in namespace e2e-tests-kubectl-x8n92 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:07:31.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x8n92" for this suite.
Feb  9 13:07:57.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:07:57.705: INFO: namespace: e2e-tests-kubectl-x8n92, resource: bindings, ignored listing per whitelist
Feb  9 13:07:57.726: INFO: namespace e2e-tests-kubectl-x8n92 deletion completed in 26.228028562s

• [SLOW TEST:46.315 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
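The expose sequence above can be replayed with plain kubectl: expose the replication controller as one service, then expose that service under a second name, and both end up selecting the same redis pod. The two expose commands are taken verbatim from the run above; only the endpoints check at the end is an addition here.

kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-x8n92
kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-x8n92
# Both services should list the same pod IP behind their respective ports:
kubectl --kubeconfig=/root/.kube/config get endpoints rm2 rm3 --namespace=e2e-tests-kubectl-x8n92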
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:07:57.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3162afee-4b3d-11ea-aa78-0242ac110005
STEP: Creating secret with name s-test-opt-upd-3162b153-4b3d-11ea-aa78-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3162afee-4b3d-11ea-aa78-0242ac110005
STEP: Updating secret s-test-opt-upd-3162b153-4b3d-11ea-aa78-0242ac110005
STEP: Creating secret with name s-test-opt-create-3162b185-4b3d-11ea-aa78-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:08:19.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5c9vn" for this suite.
Feb  9 13:08:45.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:08:45.625: INFO: namespace: e2e-tests-secrets-5c9vn, resource: bindings, ignored listing per whitelist
Feb  9 13:08:45.712: INFO: namespace e2e-tests-secrets-5c9vn deletion completed in 26.448736522s

• [SLOW TEST:47.986 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
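What the secrets spec above relies on is the optional: true field on a secret volume source, which lets the pod start even when the referenced secret is absent and pick up the contents once it appears or changes. A minimal sketch of such a pod follows; the pod name, mount path, container image, and single-volume layout are assumptions, and only the secret name and namespace are taken from this run.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-secrets-5c9vn
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: del-volume
      mountPath: /etc/secret-volumes/delete
  volumes:
  - name: del-volume
    secret:
      secretName: s-test-opt-del-3162afee-4b3d-11ea-aa78-0242ac110005
      optional: true
EOF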
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:08:45.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  9 13:11:50.550: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:11:50.676: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:11:52.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:11:52.690: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:11:54.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:11:54.684: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:11:56.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:11:56.698: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:11:58.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:11:58.698: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:00.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:00.726: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:02.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:02.694: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:04.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:04.688: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:06.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:06.707: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:08.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:08.693: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:10.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:10.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:12.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:12.734: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:14.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:14.691: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:16.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:16.689: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:18.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:18.698: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:20.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:20.725: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:22.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:22.695: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:24.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:24.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:26.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:26.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:28.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:28.708: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:30.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:30.695: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:32.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:32.685: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:34.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:34.726: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:36.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:36.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:38.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:38.689: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:40.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:40.705: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:42.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:42.698: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:44.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:44.693: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:46.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:46.737: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:48.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:48.695: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:50.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:50.718: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:52.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:52.720: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:54.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:54.702: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:56.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:56.694: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:12:58.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:12:58.690: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:00.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:00.703: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:02.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:02.711: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:04.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:04.702: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:06.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:06.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:08.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:08.703: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:10.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:10.694: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:12.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:12.699: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:14.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:14.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:16.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:16.712: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:18.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:18.703: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:20.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:20.700: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:22.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:22.713: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:24.681: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:24.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:26.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:26.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:28.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:28.715: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:30.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:30.691: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:32.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:32.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:34.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:34.701: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:36.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:36.728: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:38.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:38.700: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:40.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:40.698: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:13:42.677: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:13:42.705: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:13:42.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-rptzg" for this suite.
Feb  9 13:14:08.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:14:08.855: INFO: namespace: e2e-tests-container-lifecycle-hook-rptzg, resource: bindings, ignored listing per whitelist
Feb  9 13:14:09.153: INFO: namespace e2e-tests-container-lifecycle-hook-rptzg deletion completed in 26.429824759s

• [SLOW TEST:323.440 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
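The poststart spec above boils down to a container with a lifecycle.postStart.exec handler: the kubelet runs the command right after the container starts, and the test then verifies the side effect against the handler pod it created earlier. A hedged sketch of the pod shape follows; the image and the echoed command are placeholders, not the handler the suite actually uses, while the pod name and namespace come from this run.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-container-lifecycle-hook-rptzg
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo poststart-ran > /tmp/poststart"]
EOF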
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb  9 13:14:09.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  9 13:14:09.490: INFO: Number of nodes with available pods: 0
Feb  9 13:14:09.490: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:11.679: INFO: Number of nodes with available pods: 0
Feb  9 13:14:11.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:12.588: INFO: Number of nodes with available pods: 0
Feb  9 13:14:12.589: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:13.524: INFO: Number of nodes with available pods: 0
Feb  9 13:14:13.524: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:14.517: INFO: Number of nodes with available pods: 0
Feb  9 13:14:14.517: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:16.232: INFO: Number of nodes with available pods: 0
Feb  9 13:14:16.232: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:16.533: INFO: Number of nodes with available pods: 0
Feb  9 13:14:16.533: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:17.512: INFO: Number of nodes with available pods: 0
Feb  9 13:14:17.512: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb  9 13:14:18.514: INFO: Number of nodes with available pods: 1
Feb  9 13:14:18.514: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  9 13:14:18.800: INFO: Number of nodes with available pods: 1
Feb  9 13:14:18.800: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pmbdr, will wait for the garbage collector to delete the pods
Feb  9 13:14:19.153: INFO: Deleting DaemonSet.extensions daemon-set took: 115.742797ms
Feb  9 13:14:19.455: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.179745ms
Feb  9 13:14:26.359: INFO: Number of nodes with available pods: 0
Feb  9 13:14:26.359: INFO: Number of running nodes: 0, number of available pods: 0
Feb  9 13:14:26.373: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pmbdr/daemonsets","resourceVersion":"21094162"},"items":null}

Feb  9 13:14:26.378: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pmbdr/pods","resourceVersion":"21094162"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb  9 13:14:26.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-pmbdr" for this suite.
Feb  9 13:14:34.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:14:34.643: INFO: namespace: e2e-tests-daemonsets-pmbdr, resource: bindings, ignored listing per whitelist
Feb  9 13:14:34.658: INFO: namespace e2e-tests-daemonsets-pmbdr deletion completed in 8.262090014s

• [SLOW TEST:25.505 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
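The retry spec above forces a daemon pod into the 'Failed' phase through the API and waits for the DaemonSet controller to replace it. Patching pod status needs the API access the test framework has; a rough command-line approximation is simply to delete a daemon pod and watch the controller recreate it, as sketched below. The label selector and the pod-name placeholder are assumptions; the namespace is the one from this run.

# Approximation only: delete a daemon pod instead of failing it via the status API.
kubectl --kubeconfig=/root/.kube/config get pods --namespace=e2e-tests-daemonsets-pmbdr -l app=daemon-set
kubectl --kubeconfig=/root/.kube/config delete pod <daemon-pod-name> --namespace=e2e-tests-daemonsets-pmbdr
kubectl --kubeconfig=/root/.kube/config get pods --namespace=e2e-tests-daemonsets-pmbdr -l app=daemon-set --watch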
SSSSSSSSSSSSSSSS
Feb  9 13:14:34.659: INFO: Running AfterSuite actions on all nodes
Feb  9 13:14:34.659: INFO: Running AfterSuite actions on node 1
Feb  9 13:14:34.659: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8839.761 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS
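To rerun a single spec from a suite like this rather than all 199, the usual approach is to pass a Ginkgo focus expression to the e2e.test binary; the focus string below is only an example and the binary path depends on how the suite was built.

./e2e.test --kubeconfig=/root/.kube/config --ginkgo.focus='\[sig-apps\] Daemon set.*should retry creating failed daemon pods'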