I0125 10:47:15.351186 8 e2e.go:224] Starting e2e run "0c8fb6c8-3f60-11ea-8a8b-0242ac110006" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579949234 - Will randomize all specs
Will run 201 of 2164 specs

Jan 25 10:47:15.540: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 10:47:15.543: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 25 10:47:15.568: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 25 10:47:15.600: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 25 10:47:15.600: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 25 10:47:15.600: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 25 10:47:15.610: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 25 10:47:15.610: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 25 10:47:15.610: INFO: e2e test version: v1.13.12
Jan 25 10:47:15.612: INFO: kube-apiserver version: v1.13.8
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 10:47:15.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
Jan 25 10:47:15.834: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 25 10:47:15.839: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 10:47:15.852: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 10:47:15.856: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 25 10:47:15.879: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 25 10:47:15.879: INFO: Container weave ready: true, restart count 0
Jan 25 10:47:15.879: INFO: Container weave-npc ready: true, restart count 0
Jan 25 10:47:15.879: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 25 10:47:15.879: INFO: Container coredns ready: true, restart count 0
Jan 25 10:47:15.879: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 25 10:47:15.879: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 25 10:47:15.879: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 25 10:47:15.879: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 25 10:47:15.879: INFO: Container coredns ready: true, restart count 0
Jan 25 10:47:15.879: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 25 10:47:15.879: INFO: Container kube-proxy ready: true, restart count 0
Jan 25 10:47:15.879: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-13397198-3f60-11ea-8a8b-0242ac110006 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-13397198-3f60-11ea-8a8b-0242ac110006 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-13397198-3f60-11ea-8a8b-0242ac110006
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 10:47:38.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-2k7cs" for this suite.
Jan 25 10:47:56.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 10:47:56.608: INFO: namespace: e2e-tests-sched-pred-2k7cs, resource: bindings, ignored listing per whitelist
Jan 25 10:47:56.679: INFO: namespace e2e-tests-sched-pred-2k7cs deletion completed in 18.299182992s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:41.067 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
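The NodeSelector spec above works by applying a generated kubernetes.io/e2e-... label (value 42 in this run) to the node it picked, then relaunching the pod with a matching nodeSelector so the scheduler can only place it on that node. A minimal sketch of such a pod spec follows; the pod name, label key/value, and image are illustrative stand-ins, not the exact objects the test creates.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical label; the e2e test first applies a random
	// kubernetes.io/e2e-<uuid> label (value "42" in this run) to the chosen node.
	selector := map[string]string{
		"kubernetes.io/e2e-example": "42",
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// The scheduler only binds this pod to a node that carries
			// every label listed in NodeSelector.
			NodeSelector: selector,
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image choice
			}},
		},
	}

	fmt.Printf("pod %q requires node labels %v\n", pod.Name, pod.Spec.NodeSelector)
}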
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 10:47:56.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-259dc3b0-3f60-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 10:47:56.886: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-4z6wm" to be "success or failure"
Jan 25 10:47:56.895: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848992ms
Jan 25 10:47:58.924: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037317343s
Jan 25 10:48:00.959: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072191392s
Jan 25 10:48:03.098: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211038387s
Jan 25 10:48:05.110: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223479614s
Jan 25 10:48:07.162: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.275800066s
STEP: Saw pod success
Jan 25 10:48:07.163: INFO: Pod "pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 10:48:07.169: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006 container projected-secret-volume-test:
STEP: delete the pod
Jan 25 10:48:07.504: INFO: Waiting for pod pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006 to disappear
Jan 25 10:48:07.566: INFO: Pod pod-projected-secrets-259e65e1-3f60-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 10:48:07.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4z6wm" for this suite.
Jan 25 10:48:14.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 10:48:14.817: INFO: namespace: e2e-tests-projected-4z6wm, resource: bindings, ignored listing per whitelist
Jan 25 10:48:14.880: INFO: namespace e2e-tests-projected-4z6wm deletion completed in 7.302578366s
• [SLOW TEST:18.201 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
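In the Projected secret spec above, "volume with mappings" means the secret is exposed through a projected volume whose Items remap a secret key to a different file name under the mount point, and the test container then reads the remapped file. A rough sketch of that wiring, with assumed names for the secret, key, paths, and image (the test generates its own):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map", // assumed secret name
								},
								// The mapping: secret key "data-1" is surfaced in the
								// volume as "new-path-data-1" instead of "data-1".
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // illustrative; the e2e test uses its own test image
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	fmt.Printf("container %q mounts secret volume %q\n",
		pod.Spec.Containers[0].Name, pod.Spec.Volumes[0].Name)
}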
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 10:48:14.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 25 10:48:15.572: INFO: Waiting up to 5m0s for pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs" in namespace "e2e-tests-svcaccounts-9jfvp" to be "success or failure"
Jan 25 10:48:15.614: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 41.826123ms
Jan 25 10:48:18.106: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533944667s
Jan 25 10:48:20.129: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.55707019s
Jan 25 10:48:22.161: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588810133s
Jan 25 10:48:24.310: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737380701s
Jan 25 10:48:26.330: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.758099098s
Jan 25 10:48:28.351: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778842999s
Jan 25 10:48:30.378: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.805995027s
Jan 25 10:48:32.574: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.002060171s
STEP: Saw pod success
Jan 25 10:48:32.575: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs" satisfied condition "success or failure"
Jan 25 10:48:32.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs container token-test:
STEP: delete the pod
Jan 25 10:48:33.067: INFO: Waiting for pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs to disappear
Jan 25 10:48:33.139: INFO: Pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-c8kqs no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 25 10:48:33.156: INFO: Waiting up to 5m0s for pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9" in namespace "e2e-tests-svcaccounts-9jfvp" to be "success or failure"
Jan 25 10:48:33.182: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.837621ms
Jan 25 10:48:35.194: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038304537s
Jan 25 10:48:37.213: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056732493s
Jan 25 10:48:39.223: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066623082s
Jan 25 10:48:41.760: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604270265s
Jan 25 10:48:43.981: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.824714042s
Jan 25 10:48:46.177: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.020607261s
Jan 25 10:48:48.186: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.030054505s
STEP: Saw pod success
Jan 25 10:48:48.186: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9" satisfied condition "success or failure"
Jan 25 10:48:48.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9 container root-ca-test:
STEP: delete the pod
Jan 25 10:48:49.601: INFO: Waiting for pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9 to disappear
Jan 25 10:48:49.626: INFO: Pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-96vk9 no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 25 10:48:49.679: INFO: Waiting up to 5m0s for pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw" in namespace "e2e-tests-svcaccounts-9jfvp" to be "success or failure"
Jan 25 10:48:49.779: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 99.993367ms
Jan 25 10:48:51.870: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191694724s
Jan 25 10:48:53.890: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211327641s
Jan 25 10:48:56.186: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.507672053s
Jan 25 10:48:58.298: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619269821s
Jan 25 10:49:00.679: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 11.000714696s
Jan 25 10:49:02.712: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Pending", Reason="", readiness=false. Elapsed: 13.032752508s
Jan 25 10:49:04.741: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.062111543s
STEP: Saw pod success
Jan 25 10:49:04.741: INFO: Pod "pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw" satisfied condition "success or failure"
Jan 25 10:49:04.755: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw container namespace-test:
STEP: delete the pod
Jan 25 10:49:05.037: INFO: Waiting for pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw to disappear
Jan 25 10:49:05.057: INFO: Pod pod-service-account-30c07796-3f60-11ea-8a8b-0242ac110006-dnbtw no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 10:49:05.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-9jfvp" for this suite.
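The three containers checked above (token-test, root-ca-test, namespace-test) each verify one of the files that Kubernetes projects into a pod when the default service account token is mounted: the bearer token, the cluster CA bundle, and the pod's namespace, all under /var/run/secrets/kubernetes.io/serviceaccount. A standalone sketch of that check, intended to run inside any pod with the token auto-mounted (the e2e containers themselves simply read the same paths):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Standard location of the auto-mounted service account credentials.
	const base = "/var/run/secrets/kubernetes.io/serviceaccount"

	// token     - bearer token for the pod's service account
	// ca.crt    - CA bundle used to verify the API server
	// namespace - the namespace the pod is running in
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Printf("%s: not mounted: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}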
Jan 25 10:49:13.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:49:13.246: INFO: namespace: e2e-tests-svcaccounts-9jfvp, resource: bindings, ignored listing per whitelist Jan 25 10:49:13.284: INFO: namespace e2e-tests-svcaccounts-9jfvp deletion completed in 8.154316376s • [SLOW TEST:58.404 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:49:13.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-kh6zm [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-kh6zm STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-kh6zm Jan 25 10:49:13.729: INFO: Found 0 stateful pods, waiting for 1 Jan 25 10:49:23.768: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jan 25 10:49:33.763: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 25 10:49:33.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 25 10:49:34.961: INFO: stderr: "I0125 10:49:34.228387 40 log.go:172] (0xc00014c160) (0xc0005a26e0) Create stream\nI0125 10:49:34.228676 40 log.go:172] (0xc00014c160) (0xc0005a26e0) Stream added, broadcasting: 1\nI0125 10:49:34.237040 40 log.go:172] (0xc00014c160) Reply frame received for 1\nI0125 10:49:34.237289 40 log.go:172] (0xc00014c160) (0xc0003cabe0) Create stream\nI0125 10:49:34.237318 40 log.go:172] (0xc00014c160) (0xc0003cabe0) Stream added, broadcasting: 3\nI0125 10:49:34.239008 40 log.go:172] (0xc00014c160) Reply frame received for 3\nI0125 10:49:34.239070 40 log.go:172] (0xc00014c160) (0xc000300000) Create stream\nI0125 10:49:34.239081 40 log.go:172] (0xc00014c160) (0xc000300000) Stream added, broadcasting: 5\nI0125 10:49:34.240128 40 log.go:172] (0xc00014c160) Reply frame received for 5\nI0125 10:49:34.531031 40 log.go:172] (0xc00014c160) Data frame received for 3\nI0125 10:49:34.531180 40 
log.go:172] (0xc0003cabe0) (3) Data frame handling\nI0125 10:49:34.531219 40 log.go:172] (0xc0003cabe0) (3) Data frame sent\nI0125 10:49:34.941445 40 log.go:172] (0xc00014c160) (0xc000300000) Stream removed, broadcasting: 5\nI0125 10:49:34.941546 40 log.go:172] (0xc00014c160) Data frame received for 1\nI0125 10:49:34.941604 40 log.go:172] (0xc00014c160) (0xc0003cabe0) Stream removed, broadcasting: 3\nI0125 10:49:34.941674 40 log.go:172] (0xc0005a26e0) (1) Data frame handling\nI0125 10:49:34.941689 40 log.go:172] (0xc0005a26e0) (1) Data frame sent\nI0125 10:49:34.941701 40 log.go:172] (0xc00014c160) (0xc0005a26e0) Stream removed, broadcasting: 1\nI0125 10:49:34.941723 40 log.go:172] (0xc00014c160) Go away received\nI0125 10:49:34.942251 40 log.go:172] (0xc00014c160) (0xc0005a26e0) Stream removed, broadcasting: 1\nI0125 10:49:34.942262 40 log.go:172] (0xc00014c160) (0xc0003cabe0) Stream removed, broadcasting: 3\nI0125 10:49:34.942268 40 log.go:172] (0xc00014c160) (0xc000300000) Stream removed, broadcasting: 5\n" Jan 25 10:49:34.962: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 25 10:49:34.962: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 25 10:49:34.981: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 25 10:49:45.002: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 25 10:49:45.002: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 10:49:45.067: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:49:45.067: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:49:45.067: INFO: Jan 25 10:49:45.067: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 25 10:49:46.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976709389s Jan 25 10:49:47.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.957165406s Jan 25 10:49:48.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.81058824s Jan 25 10:49:49.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.642701993s Jan 25 10:49:50.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.428677729s Jan 25 10:49:51.866: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.296989719s Jan 25 10:49:52.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.177797264s Jan 25 10:49:54.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.155776396s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-kh6zm Jan 25 10:49:55.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:49:59.840: INFO: stderr: "I0125 10:49:59.329927 61 log.go:172] (0xc0006a82c0) (0xc0006d4640) Create stream\nI0125 10:49:59.330170 61 log.go:172] (0xc0006a82c0) (0xc0006d4640) 
Stream added, broadcasting: 1\nI0125 10:49:59.340801 61 log.go:172] (0xc0006a82c0) Reply frame received for 1\nI0125 10:49:59.340916 61 log.go:172] (0xc0006a82c0) (0xc000022c80) Create stream\nI0125 10:49:59.340930 61 log.go:172] (0xc0006a82c0) (0xc000022c80) Stream added, broadcasting: 3\nI0125 10:49:59.343235 61 log.go:172] (0xc0006a82c0) Reply frame received for 3\nI0125 10:49:59.343264 61 log.go:172] (0xc0006a82c0) (0xc0006d46e0) Create stream\nI0125 10:49:59.343274 61 log.go:172] (0xc0006a82c0) (0xc0006d46e0) Stream added, broadcasting: 5\nI0125 10:49:59.344309 61 log.go:172] (0xc0006a82c0) Reply frame received for 5\nI0125 10:49:59.668045 61 log.go:172] (0xc0006a82c0) Data frame received for 3\nI0125 10:49:59.668119 61 log.go:172] (0xc000022c80) (3) Data frame handling\nI0125 10:49:59.668141 61 log.go:172] (0xc000022c80) (3) Data frame sent\nI0125 10:49:59.832452 61 log.go:172] (0xc0006a82c0) Data frame received for 1\nI0125 10:49:59.832561 61 log.go:172] (0xc0006a82c0) (0xc000022c80) Stream removed, broadcasting: 3\nI0125 10:49:59.832582 61 log.go:172] (0xc0006d4640) (1) Data frame handling\nI0125 10:49:59.832594 61 log.go:172] (0xc0006a82c0) (0xc0006d46e0) Stream removed, broadcasting: 5\nI0125 10:49:59.832627 61 log.go:172] (0xc0006d4640) (1) Data frame sent\nI0125 10:49:59.832637 61 log.go:172] (0xc0006a82c0) (0xc0006d4640) Stream removed, broadcasting: 1\nI0125 10:49:59.832645 61 log.go:172] (0xc0006a82c0) Go away received\nI0125 10:49:59.833228 61 log.go:172] (0xc0006a82c0) (0xc0006d4640) Stream removed, broadcasting: 1\nI0125 10:49:59.833256 61 log.go:172] (0xc0006a82c0) (0xc000022c80) Stream removed, broadcasting: 3\nI0125 10:49:59.833270 61 log.go:172] (0xc0006a82c0) (0xc0006d46e0) Stream removed, broadcasting: 5\n" Jan 25 10:49:59.841: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 25 10:49:59.841: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 25 10:49:59.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:50:02.043: INFO: rc: 1 Jan 25 10:50:02.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0015d6180 exit status 1 true [0xc000430bd8 0xc000430d00 0xc000430e90] [0xc000430bd8 0xc000430d00 0xc000430e90] [0xc000430c10 0xc000430e10] [0x935700 0x935700] 0xc0019521e0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 25 10:50:12.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:50:12.965: INFO: stderr: "I0125 10:50:12.649485 106 log.go:172] (0xc000152fd0) (0xc0005437c0) Create stream\nI0125 10:50:12.649762 106 log.go:172] (0xc000152fd0) (0xc0005437c0) Stream added, broadcasting: 1\nI0125 10:50:12.667003 106 log.go:172] (0xc000152fd0) Reply frame received for 1\nI0125 10:50:12.667173 106 log.go:172] (0xc000152fd0) (0xc000542b40) Create stream\nI0125 10:50:12.667193 106 log.go:172] (0xc000152fd0) (0xc000542b40) Stream added, 
broadcasting: 3\nI0125 10:50:12.668480 106 log.go:172] (0xc000152fd0) Reply frame received for 3\nI0125 10:50:12.668531 106 log.go:172] (0xc000152fd0) (0xc000690000) Create stream\nI0125 10:50:12.668556 106 log.go:172] (0xc000152fd0) (0xc000690000) Stream added, broadcasting: 5\nI0125 10:50:12.669781 106 log.go:172] (0xc000152fd0) Reply frame received for 5\nI0125 10:50:12.788739 106 log.go:172] (0xc000152fd0) Data frame received for 5\nI0125 10:50:12.788856 106 log.go:172] (0xc000690000) (5) Data frame handling\nI0125 10:50:12.788886 106 log.go:172] (0xc000690000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0125 10:50:12.788978 106 log.go:172] (0xc000152fd0) Data frame received for 3\nI0125 10:50:12.789015 106 log.go:172] (0xc000542b40) (3) Data frame handling\nI0125 10:50:12.789036 106 log.go:172] (0xc000542b40) (3) Data frame sent\nI0125 10:50:12.945862 106 log.go:172] (0xc000152fd0) Data frame received for 1\nI0125 10:50:12.946023 106 log.go:172] (0xc000152fd0) (0xc000542b40) Stream removed, broadcasting: 3\nI0125 10:50:12.946094 106 log.go:172] (0xc0005437c0) (1) Data frame handling\nI0125 10:50:12.946120 106 log.go:172] (0xc0005437c0) (1) Data frame sent\nI0125 10:50:12.946163 106 log.go:172] (0xc000152fd0) (0xc000690000) Stream removed, broadcasting: 5\nI0125 10:50:12.946206 106 log.go:172] (0xc000152fd0) (0xc0005437c0) Stream removed, broadcasting: 1\nI0125 10:50:12.946241 106 log.go:172] (0xc000152fd0) Go away received\nI0125 10:50:12.946787 106 log.go:172] (0xc000152fd0) (0xc0005437c0) Stream removed, broadcasting: 1\nI0125 10:50:12.946803 106 log.go:172] (0xc000152fd0) (0xc000542b40) Stream removed, broadcasting: 3\nI0125 10:50:12.946816 106 log.go:172] (0xc000152fd0) (0xc000690000) Stream removed, broadcasting: 5\n" Jan 25 10:50:12.965: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 25 10:50:12.965: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 25 10:50:12.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:50:13.474: INFO: stderr: "I0125 10:50:13.191047 127 log.go:172] (0xc00014c630) (0xc000533400) Create stream\nI0125 10:50:13.191252 127 log.go:172] (0xc00014c630) (0xc000533400) Stream added, broadcasting: 1\nI0125 10:50:13.195606 127 log.go:172] (0xc00014c630) Reply frame received for 1\nI0125 10:50:13.195675 127 log.go:172] (0xc00014c630) (0xc0005ce000) Create stream\nI0125 10:50:13.195695 127 log.go:172] (0xc00014c630) (0xc0005ce000) Stream added, broadcasting: 3\nI0125 10:50:13.196586 127 log.go:172] (0xc00014c630) Reply frame received for 3\nI0125 10:50:13.196625 127 log.go:172] (0xc00014c630) (0xc0005334a0) Create stream\nI0125 10:50:13.196633 127 log.go:172] (0xc00014c630) (0xc0005334a0) Stream added, broadcasting: 5\nI0125 10:50:13.197432 127 log.go:172] (0xc00014c630) Reply frame received for 5\nI0125 10:50:13.327193 127 log.go:172] (0xc00014c630) Data frame received for 3\nI0125 10:50:13.327339 127 log.go:172] (0xc0005ce000) (3) Data frame handling\nI0125 10:50:13.327372 127 log.go:172] (0xc0005ce000) (3) Data frame sent\nI0125 10:50:13.327391 127 log.go:172] (0xc00014c630) Data frame received for 5\nI0125 10:50:13.327409 127 log.go:172] (0xc0005334a0) (5) Data frame handling\nI0125 10:50:13.327431 127 log.go:172] (0xc0005334a0) (5) Data frame sent\nmv: 
can't rename '/tmp/index.html': No such file or directory\nI0125 10:50:13.464782 127 log.go:172] (0xc00014c630) Data frame received for 1\nI0125 10:50:13.464998 127 log.go:172] (0xc00014c630) (0xc0005ce000) Stream removed, broadcasting: 3\nI0125 10:50:13.465081 127 log.go:172] (0xc000533400) (1) Data frame handling\nI0125 10:50:13.465103 127 log.go:172] (0xc000533400) (1) Data frame sent\nI0125 10:50:13.465219 127 log.go:172] (0xc00014c630) (0xc0005334a0) Stream removed, broadcasting: 5\nI0125 10:50:13.465267 127 log.go:172] (0xc00014c630) (0xc000533400) Stream removed, broadcasting: 1\nI0125 10:50:13.465351 127 log.go:172] (0xc00014c630) Go away received\nI0125 10:50:13.465822 127 log.go:172] (0xc00014c630) (0xc000533400) Stream removed, broadcasting: 1\nI0125 10:50:13.465844 127 log.go:172] (0xc00014c630) (0xc0005ce000) Stream removed, broadcasting: 3\nI0125 10:50:13.465853 127 log.go:172] (0xc00014c630) (0xc0005334a0) Stream removed, broadcasting: 5\n" Jan 25 10:50:13.475: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 25 10:50:13.475: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 25 10:50:13.492: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 10:50:13.492: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 10:50:13.492: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 25 10:50:13.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 25 10:50:14.239: INFO: stderr: "I0125 10:50:13.742842 148 log.go:172] (0xc0008202c0) (0xc0003ec780) Create stream\nI0125 10:50:13.743109 148 log.go:172] (0xc0008202c0) (0xc0003ec780) Stream added, broadcasting: 1\nI0125 10:50:13.751828 148 log.go:172] (0xc0008202c0) Reply frame received for 1\nI0125 10:50:13.751893 148 log.go:172] (0xc0008202c0) (0xc0008a8000) Create stream\nI0125 10:50:13.751920 148 log.go:172] (0xc0008202c0) (0xc0008a8000) Stream added, broadcasting: 3\nI0125 10:50:13.753139 148 log.go:172] (0xc0008202c0) Reply frame received for 3\nI0125 10:50:13.753231 148 log.go:172] (0xc0008202c0) (0xc0008ba000) Create stream\nI0125 10:50:13.753275 148 log.go:172] (0xc0008202c0) (0xc0008ba000) Stream added, broadcasting: 5\nI0125 10:50:13.759424 148 log.go:172] (0xc0008202c0) Reply frame received for 5\nI0125 10:50:14.074175 148 log.go:172] (0xc0008202c0) Data frame received for 3\nI0125 10:50:14.074253 148 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0125 10:50:14.074291 148 log.go:172] (0xc0008a8000) (3) Data frame sent\nI0125 10:50:14.229210 148 log.go:172] (0xc0008202c0) (0xc0008a8000) Stream removed, broadcasting: 3\nI0125 10:50:14.229590 148 log.go:172] (0xc0008202c0) Data frame received for 1\nI0125 10:50:14.229682 148 log.go:172] (0xc0008202c0) (0xc0008ba000) Stream removed, broadcasting: 5\nI0125 10:50:14.229748 148 log.go:172] (0xc0003ec780) (1) Data frame handling\nI0125 10:50:14.229781 148 log.go:172] (0xc0003ec780) (1) Data frame sent\nI0125 10:50:14.229789 148 log.go:172] (0xc0008202c0) (0xc0003ec780) Stream removed, broadcasting: 1\nI0125 10:50:14.229795 148 log.go:172] (0xc0008202c0) Go away received\nI0125 10:50:14.230360 148 log.go:172] (0xc0008202c0) (0xc0003ec780) 
Stream removed, broadcasting: 1\nI0125 10:50:14.230391 148 log.go:172] (0xc0008202c0) (0xc0008a8000) Stream removed, broadcasting: 3\nI0125 10:50:14.230408 148 log.go:172] (0xc0008202c0) (0xc0008ba000) Stream removed, broadcasting: 5\n" Jan 25 10:50:14.239: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 25 10:50:14.239: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 25 10:50:14.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 25 10:50:15.046: INFO: stderr: "I0125 10:50:14.559670 171 log.go:172] (0xc000138630) (0xc00063b4a0) Create stream\nI0125 10:50:14.560061 171 log.go:172] (0xc000138630) (0xc00063b4a0) Stream added, broadcasting: 1\nI0125 10:50:14.661996 171 log.go:172] (0xc000138630) Reply frame received for 1\nI0125 10:50:14.662158 171 log.go:172] (0xc000138630) (0xc00063b540) Create stream\nI0125 10:50:14.662176 171 log.go:172] (0xc000138630) (0xc00063b540) Stream added, broadcasting: 3\nI0125 10:50:14.667732 171 log.go:172] (0xc000138630) Reply frame received for 3\nI0125 10:50:14.667862 171 log.go:172] (0xc000138630) (0xc0001a8000) Create stream\nI0125 10:50:14.667881 171 log.go:172] (0xc000138630) (0xc0001a8000) Stream added, broadcasting: 5\nI0125 10:50:14.672832 171 log.go:172] (0xc000138630) Reply frame received for 5\nI0125 10:50:14.932619 171 log.go:172] (0xc000138630) Data frame received for 3\nI0125 10:50:14.932682 171 log.go:172] (0xc00063b540) (3) Data frame handling\nI0125 10:50:14.932695 171 log.go:172] (0xc00063b540) (3) Data frame sent\nI0125 10:50:15.031850 171 log.go:172] (0xc000138630) (0xc00063b540) Stream removed, broadcasting: 3\nI0125 10:50:15.032231 171 log.go:172] (0xc000138630) Data frame received for 1\nI0125 10:50:15.032309 171 log.go:172] (0xc00063b4a0) (1) Data frame handling\nI0125 10:50:15.032357 171 log.go:172] (0xc00063b4a0) (1) Data frame sent\nI0125 10:50:15.032493 171 log.go:172] (0xc000138630) (0xc0001a8000) Stream removed, broadcasting: 5\nI0125 10:50:15.032606 171 log.go:172] (0xc000138630) (0xc00063b4a0) Stream removed, broadcasting: 1\nI0125 10:50:15.032638 171 log.go:172] (0xc000138630) Go away received\nI0125 10:50:15.034145 171 log.go:172] (0xc000138630) (0xc00063b4a0) Stream removed, broadcasting: 1\nI0125 10:50:15.034179 171 log.go:172] (0xc000138630) (0xc00063b540) Stream removed, broadcasting: 3\nI0125 10:50:15.034217 171 log.go:172] (0xc000138630) (0xc0001a8000) Stream removed, broadcasting: 5\n" Jan 25 10:50:15.047: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 25 10:50:15.047: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 25 10:50:15.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 25 10:50:15.763: INFO: stderr: "I0125 10:50:15.226608 193 log.go:172] (0xc000138790) (0xc000754640) Create stream\nI0125 10:50:15.226818 193 log.go:172] (0xc000138790) (0xc000754640) Stream added, broadcasting: 1\nI0125 10:50:15.233868 193 log.go:172] (0xc000138790) Reply frame received for 1\nI0125 10:50:15.233910 193 log.go:172] (0xc000138790) (0xc0006eac80) Create stream\nI0125 10:50:15.233942 193 log.go:172] 
(0xc000138790) (0xc0006eac80) Stream added, broadcasting: 3\nI0125 10:50:15.235082 193 log.go:172] (0xc000138790) Reply frame received for 3\nI0125 10:50:15.235108 193 log.go:172] (0xc000138790) (0xc0007ac000) Create stream\nI0125 10:50:15.235116 193 log.go:172] (0xc000138790) (0xc0007ac000) Stream added, broadcasting: 5\nI0125 10:50:15.238215 193 log.go:172] (0xc000138790) Reply frame received for 5\nI0125 10:50:15.629381 193 log.go:172] (0xc000138790) Data frame received for 3\nI0125 10:50:15.629435 193 log.go:172] (0xc0006eac80) (3) Data frame handling\nI0125 10:50:15.629449 193 log.go:172] (0xc0006eac80) (3) Data frame sent\nI0125 10:50:15.751780 193 log.go:172] (0xc000138790) (0xc0006eac80) Stream removed, broadcasting: 3\nI0125 10:50:15.752033 193 log.go:172] (0xc000138790) Data frame received for 1\nI0125 10:50:15.752068 193 log.go:172] (0xc000754640) (1) Data frame handling\nI0125 10:50:15.752095 193 log.go:172] (0xc000754640) (1) Data frame sent\nI0125 10:50:15.752129 193 log.go:172] (0xc000138790) (0xc0007ac000) Stream removed, broadcasting: 5\nI0125 10:50:15.752204 193 log.go:172] (0xc000138790) (0xc000754640) Stream removed, broadcasting: 1\nI0125 10:50:15.752327 193 log.go:172] (0xc000138790) Go away received\nI0125 10:50:15.752717 193 log.go:172] (0xc000138790) (0xc000754640) Stream removed, broadcasting: 1\nI0125 10:50:15.752796 193 log.go:172] (0xc000138790) (0xc0006eac80) Stream removed, broadcasting: 3\nI0125 10:50:15.752842 193 log.go:172] (0xc000138790) (0xc0007ac000) Stream removed, broadcasting: 5\n" Jan 25 10:50:15.763: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 25 10:50:15.763: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 25 10:50:15.763: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 10:50:15.838: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 25 10:50:25.873: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 25 10:50:25.873: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 25 10:50:25.873: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 25 10:50:25.905: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:25.905: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:25.906: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:25.906: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:25.906: INFO: Jan 25 10:50:25.906: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 25 10:50:26.919: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:26.919: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:26.919: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:26.919: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:26.919: INFO: Jan 25 10:50:26.919: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 25 10:50:29.068: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:29.068: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:29.068: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:29.068: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 
10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:29.068: INFO: Jan 25 10:50:29.068: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 25 10:50:30.175: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:30.176: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:30.176: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:30.176: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:30.176: INFO: Jan 25 10:50:30.176: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 25 10:50:32.965: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:32.966: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:32.966: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:32.966: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 
10:50:32.966: INFO: Jan 25 10:50:32.966: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 25 10:50:34.647: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:34.647: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:34.647: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:34.647: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:34.647: INFO: Jan 25 10:50:34.647: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 25 10:50:36.389: INFO: POD NODE PHASE GRACE CONDITIONS Jan 25 10:50:36.390: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:13 +0000 UTC }] Jan 25 10:50:36.390: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:36.390: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:49:45 +0000 UTC }] Jan 25 10:50:36.390: INFO: Jan 25 10:50:36.390: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will 
run in namespacee2e-tests-statefulset-kh6zm Jan 25 10:50:37.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:50:37.841: INFO: rc: 1 Jan 25 10:50:37.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00100d980 exit status 1 true [0xc0008140a0 0xc0008140b8 0xc0008140d0] [0xc0008140a0 0xc0008140b8 0xc0008140d0] [0xc0008140b0 0xc0008140c8] [0x935700 0x935700] 0xc001342cc0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 25 10:50:47.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:50:48.025: INFO: rc: 1 Jan 25 10:50:48.025: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00100daa0 exit status 1 true [0xc0008140d8 0xc0008140f0 0xc000814108] [0xc0008140d8 0xc0008140f0 0xc000814108] [0xc0008140e8 0xc000814100] [0x935700 0x935700] 0xc001342f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 25 10:50:58.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:50:58.231: INFO: rc: 1 Jan 25 10:50:58.231: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00152c870 exit status 1 true [0xc000bca0b8 0xc000bca0e0 0xc000bca138] [0xc000bca0b8 0xc000bca0e0 0xc000bca138] [0xc000bca0d0 0xc000bca120] [0x935700 0x935700] 0xc0008a2600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 25 10:51:08.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:51:09.525: INFO: rc: 1 Jan 25 10:51:09.525: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0009fc5a0 exit status 1 true [0xc00162a120 0xc00162a138 0xc00162a150] [0xc00162a120 0xc00162a138 0xc00162a150] [0xc00162a130 0xc00162a148] [0x935700 0x935700] 0xc001838ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 25 10:51:19.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:51:19.617: INFO: rc: 1 Jan 25 10:51:19.618: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0009fc780 exit status 1 true [0xc00162a158 0xc00162a170 0xc00162a188] [0xc00162a158 0xc00162a170 0xc00162a188] [0xc00162a168 0xc00162a180] [0x935700 0x935700] 0xc001838d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [identical RunHostCmd attempts repeat on the same 10s backoff from 10:51:29 through 10:55:24, each returning rc: 1 with the same NotFound error for pod "ss-0"] Jan 25 10:55:34.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:55:34.215: INFO: rc: 1 Jan 25 10:55:34.215: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec
--namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00155c330 exit status 1 true [0xc000ffa038 0xc000ffa050 0xc000ffa068] [0xc000ffa038 0xc000ffa050 0xc000ffa068] [0xc000ffa048 0xc000ffa060] [0x935700 0x935700] 0xc0018ac4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 25 10:55:44.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kh6zm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 10:55:44.362: INFO: rc: 1 Jan 25 10:55:44.362: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 25 10:55:44.362: INFO: Scaling statefulset ss to 0 Jan 25 10:55:44.425: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 25 10:55:44.430: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kh6zm Jan 25 10:55:44.435: INFO: Scaling statefulset ss to 0 Jan 25 10:55:44.465: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 10:55:44.471: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:55:44.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-kh6zm" for this suite. Jan 25 10:55:52.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:55:52.751: INFO: namespace: e2e-tests-statefulset-kh6zm, resource: bindings, ignored listing per whitelist Jan 25 10:55:52.877: INFO: namespace e2e-tests-statefulset-kh6zm deletion completed in 8.25555159s • [SLOW TEST:399.593 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:55:52.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 10:55:53.150: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-5q4kz" to be "success or failure" Jan 25 10:55:53.168: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.587624ms Jan 25 10:55:55.441: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291111403s Jan 25 10:55:57.455: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305038257s Jan 25 10:56:00.481: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.331254124s Jan 25 10:56:02.508: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.35750783s Jan 25 10:56:04.592: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.441719254s Jan 25 10:56:06.610: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.459992336s STEP: Saw pod success Jan 25 10:56:06.611: INFO: Pod "downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 10:56:06.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 10:56:07.155: INFO: Waiting for pod downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006 to disappear Jan 25 10:56:07.168: INFO: Pod downwardapi-volume-417e10b2-3f61-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:56:07.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5q4kz" for this suite. 
Jan 25 10:56:14.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:56:15.135: INFO: namespace: e2e-tests-projected-5q4kz, resource: bindings, ignored listing per whitelist Jan 25 10:56:15.229: INFO: namespace e2e-tests-projected-5q4kz deletion completed in 8.05288918s • [SLOW TEST:22.352 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:56:15.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 25 10:56:15.464: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 25 10:56:20.494: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:56:20.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-kkvxs" for this suite. 
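What the spec calls releasing a pod can be reproduced by hand: relabel one of a ReplicationController's pods so it no longer matches the controller's selector, and the controller stops counting it and brings up a replacement. A rough sketch follows; the selector key app and the pod name are assumptions, only the basename pod-release comes from the run above.

# list the pods the controller currently matches (selector key/value assumed)
kubectl get pods -l app=pod-release
# overwrite the matched label on one pod so it falls out of the selector
kubectl label pod pod-release-abcde app=released --overwrite
# the relabeled pod keeps running but is released; the RC creates a replacement
kubectl get pods -l app=pod-release
kubectl get pod pod-release-abcde --show-labels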
Jan 25 10:56:29.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:56:29.978: INFO: namespace: e2e-tests-replication-controller-kkvxs, resource: bindings, ignored listing per whitelist Jan 25 10:56:30.106: INFO: namespace e2e-tests-replication-controller-kkvxs deletion completed in 9.292484104s • [SLOW TEST:14.876 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:56:30.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jan 25 10:56:30.953: INFO: created pod pod-service-account-defaultsa Jan 25 10:56:30.953: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 25 10:56:30.981: INFO: created pod pod-service-account-mountsa Jan 25 10:56:30.981: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 25 10:56:30.996: INFO: created pod pod-service-account-nomountsa Jan 25 10:56:30.996: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 25 10:56:31.028: INFO: created pod pod-service-account-defaultsa-mountspec Jan 25 10:56:31.028: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 25 10:56:31.167: INFO: created pod pod-service-account-mountsa-mountspec Jan 25 10:56:31.167: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 25 10:56:31.195: INFO: created pod pod-service-account-nomountsa-mountspec Jan 25 10:56:31.195: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 25 10:56:31.240: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 25 10:56:31.240: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 25 10:56:31.265: INFO: created pod pod-service-account-mountsa-nomountspec Jan 25 10:56:31.265: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 25 10:56:31.415: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 25 10:56:31.415: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:56:31.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-9vnrn" for this suite. 
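The opt-out being verified here is the automountServiceAccountToken field, which can be set on a ServiceAccount or on the pod spec; when both are present the pod-level value wins. A minimal sketch with illustrative object names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                      # illustrative name
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token                    # illustrative name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level field overrides the ServiceAccount setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
# no service account token volume should be mounted into the container
kubectl get pod pod-no-token -o jsonpath='{.spec.containers[0].volumeMounts}'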
Jan 25 10:57:27.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:57:27.748: INFO: namespace: e2e-tests-svcaccounts-9vnrn, resource: bindings, ignored listing per whitelist Jan 25 10:57:27.782: INFO: namespace e2e-tests-svcaccounts-9vnrn deletion completed in 56.332517111s • [SLOW TEST:57.676 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:57:27.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 10:57:28.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-kgcx2" to be "success or failure" Jan 25 10:57:28.228: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 127.991049ms Jan 25 10:57:30.438: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337253031s Jan 25 10:57:32.459: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358458731s Jan 25 10:57:34.480: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380187199s Jan 25 10:57:38.047: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.94693292s Jan 25 10:57:41.824: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.723787711s Jan 25 10:57:43.846: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.745263611s Jan 25 10:57:45.863: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.762612508s Jan 25 10:57:47.898: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.797729486s STEP: Saw pod success Jan 25 10:57:47.898: INFO: Pod "downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 10:57:47.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 10:57:49.635: INFO: Waiting for pod downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006 to disappear Jan 25 10:57:49.660: INFO: Pod downwardapi-volume-7a12a960-3f61-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:57:49.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kgcx2" for this suite. Jan 25 10:57:55.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:57:55.953: INFO: namespace: e2e-tests-projected-kgcx2, resource: bindings, ignored listing per whitelist Jan 25 10:57:56.053: INFO: namespace e2e-tests-projected-kgcx2 deletion completed in 6.384076802s • [SLOW TEST:28.270 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:57:56.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jan 25 10:57:56.279: INFO: Waiting up to 5m0s for pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006" in namespace "e2e-tests-var-expansion-8rx5n" to be "success or failure" Jan 25 10:57:56.315: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 34.913428ms Jan 25 10:57:58.456: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176312895s Jan 25 10:58:01.379: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.099062792s Jan 25 10:58:03.424: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.14456296s Jan 25 10:58:05.448: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.168150328s Jan 25 10:58:07.461: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.181293054s STEP: Saw pod success Jan 25 10:58:07.461: INFO: Pod "var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 10:58:07.467: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006 container dapi-container: STEP: delete the pod Jan 25 10:58:08.175: INFO: Waiting for pod var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006 to disappear Jan 25 10:58:08.185: INFO: Pod var-expansion-8ae045d3-3f61-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:58:08.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8rx5n" for this suite. Jan 25 10:58:16.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:58:16.339: INFO: namespace: e2e-tests-var-expansion-8rx5n, resource: bindings, ignored listing per whitelist Jan 25 10:58:16.516: INFO: namespace e2e-tests-var-expansion-8rx5n deletion completed in 8.322772039s • [SLOW TEST:20.463 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:58:16.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 25 10:58:16.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cx7dh' Jan 25 10:58:19.301: INFO: stderr: "" Jan 25 10:58:19.302: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
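The create -f - invocation above reads a manifest from stdin. A simplified stand-in for the kind of ReplicationController it submits is sketched below; only the name redis-master and the app=redis label are taken from this run, the image and port are assumptions.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis                    # the test uses its own redis image; plain redis is an assumption
        ports:
        - containerPort: 6379
EOF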
Jan 25 10:58:20.673: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:20.673: INFO: Found 0 / 1 Jan 25 10:58:21.693: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:21.693: INFO: Found 0 / 1 Jan 25 10:58:22.315: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:22.315: INFO: Found 0 / 1 Jan 25 10:58:23.381: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:23.381: INFO: Found 0 / 1 Jan 25 10:58:25.443: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:25.443: INFO: Found 0 / 1 Jan 25 10:58:26.519: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:26.519: INFO: Found 0 / 1 Jan 25 10:58:27.318: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:27.318: INFO: Found 0 / 1 Jan 25 10:58:28.320: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:28.320: INFO: Found 0 / 1 Jan 25 10:58:29.414: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:29.414: INFO: Found 1 / 1 Jan 25 10:58:29.414: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 25 10:58:29.420: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:29.420: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 25 10:58:29.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-p4flf --namespace=e2e-tests-kubectl-cx7dh -p {"metadata":{"annotations":{"x":"y"}}}' Jan 25 10:58:29.744: INFO: stderr: "" Jan 25 10:58:29.744: INFO: stdout: "pod/redis-master-p4flf patched\n" STEP: checking annotations Jan 25 10:58:29.755: INFO: Selector matched 1 pods for map[app:redis] Jan 25 10:58:29.755: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 10:58:29.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cx7dh" for this suite. 
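Stripped of the harness flags, the patch in this run is an ordinary strategic-merge patch of pod metadata, and the result can be read back with jsonpath. The pod name below is the one generated in this run and will differ elsewhere.

# add the annotation the same way the test does
kubectl patch pod redis-master-p4flf -p '{"metadata":{"annotations":{"x":"y"}}}'
# read it back; prints y once the patch has applied
kubectl get pod redis-master-p4flf -o jsonpath='{.metadata.annotations.x}'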
Jan 25 10:58:51.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 10:58:51.877: INFO: namespace: e2e-tests-kubectl-cx7dh, resource: bindings, ignored listing per whitelist Jan 25 10:58:51.969: INFO: namespace e2e-tests-kubectl-cx7dh deletion completed in 22.208087129s • [SLOW TEST:35.452 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 10:58:51.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pz8mf [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 25 10:58:52.343: INFO: Found 0 stateful pods, waiting for 3 Jan 25 10:59:02.570: INFO: Found 2 stateful pods, waiting for 3 Jan 25 10:59:12.361: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 10:59:12.362: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 10:59:12.362: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 10:59:22.365: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 10:59:22.365: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 10:59:22.365: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 25 10:59:22.432: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 25 10:59:32.603: INFO: Updating stateful set ss2 Jan 25 10:59:32.636: INFO: Waiting for Pod e2e-tests-statefulset-pz8mf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 25 10:59:43.130: INFO: Found 1 stateful pods, waiting for 3 Jan 25 10:59:53.229: INFO: Found 2
stateful pods, waiting for 3 Jan 25 11:00:03.187: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:00:03.187: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:00:03.187: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 11:00:13.159: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:00:13.160: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:00:13.160: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 11:00:23.162: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:00:23.163: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:00:23.163: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 25 11:00:23.270: INFO: Updating stateful set ss2 Jan 25 11:00:23.292: INFO: Waiting for Pod e2e-tests-statefulset-pz8mf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:00:33.317: INFO: Waiting for Pod e2e-tests-statefulset-pz8mf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:00:43.389: INFO: Updating stateful set ss2 Jan 25 11:00:43.662: INFO: Waiting for StatefulSet e2e-tests-statefulset-pz8mf/ss2 to complete update Jan 25 11:00:43.663: INFO: Waiting for Pod e2e-tests-statefulset-pz8mf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:00:54.054: INFO: Waiting for StatefulSet e2e-tests-statefulset-pz8mf/ss2 to complete update Jan 25 11:00:54.054: INFO: Waiting for Pod e2e-tests-statefulset-pz8mf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:01:03.710: INFO: Waiting for StatefulSet e2e-tests-statefulset-pz8mf/ss2 to complete update Jan 25 11:01:03.710: INFO: Waiting for Pod e2e-tests-statefulset-pz8mf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:01:13.801: INFO: Waiting for StatefulSet e2e-tests-statefulset-pz8mf/ss2 to complete update Jan 25 11:01:24.435: INFO: Waiting for StatefulSet e2e-tests-statefulset-pz8mf/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 25 11:01:33.692: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pz8mf Jan 25 11:01:33.701: INFO: Scaling statefulset ss2 to 0 Jan 25 11:02:03.824: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 11:02:03.838: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:02:04.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pz8mf" for this suite. 
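Both behaviours in this spec hang off spec.updateStrategy.rollingUpdate.partition: pods whose ordinal is greater than or equal to the partition move to the new revision, the rest stay on the old one. The commands below sketch the same canary-then-phased sequence with kubectl; the StatefulSet name ss2 and the images come from the log, while the container name nginx is an assumption.

# hold every pod on the current revision: set the partition above the highest ordinal
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
# create a new revision by changing the pod template (container name "nginx" assumed)
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# canary: only ordinals >= 2, i.e. ss2-2, roll to the new revision
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# phased roll-out: lower the partition step by step until it reaches 0
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2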
Jan 25 11:02:12.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:02:12.526: INFO: namespace: e2e-tests-statefulset-pz8mf, resource: bindings, ignored listing per whitelist Jan 25 11:02:12.804: INFO: namespace e2e-tests-statefulset-pz8mf deletion completed in 8.701194837s • [SLOW TEST:200.835 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:02:12.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 11:02:13.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-js5sk" to be "success or failure" Jan 25 11:02:13.196: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.512016ms Jan 25 11:02:16.233: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054013813s Jan 25 11:02:18.281: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.102163802s Jan 25 11:02:20.297: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.118237982s Jan 25 11:02:24.325: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.146797255s Jan 25 11:02:26.396: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.21718708s Jan 25 11:02:28.419: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.240370231s Jan 25 11:02:30.435: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.256379412s STEP: Saw pod success Jan 25 11:02:30.435: INFO: Pod "downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:02:30.443: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 11:02:30.593: INFO: Waiting for pod downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006 to disappear Jan 25 11:02:30.777: INFO: Pod downwardapi-volume-23fed1a8-3f62-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:02:30.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-js5sk" for this suite. Jan 25 11:02:36.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:02:37.165: INFO: namespace: e2e-tests-projected-js5sk, resource: bindings, ignored listing per whitelist Jan 25 11:02:37.169: INFO: namespace e2e-tests-projected-js5sk deletion completed in 6.342596525s • [SLOW TEST:24.365 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:02:37.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-z5b2v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z5b2v to expose endpoints map[] Jan 25 11:02:37.494: INFO: Get endpoints failed (20.80578ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 25 11:02:38.530: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z5b2v exposes endpoints map[] (1.057073612s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-z5b2v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z5b2v to expose endpoints map[pod1:[100]] Jan 25 11:02:42.911: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.298742713s elapsed, will retry) Jan 25 11:02:49.607: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z5b2v exposes endpoints map[pod1:[100]] (10.994563261s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-z5b2v STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace e2e-tests-services-z5b2v to expose endpoints map[pod1:[100] pod2:[101]] Jan 25 11:02:55.387: INFO: Unexpected endpoints: found map[3325b4d4-3f62-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.746809781s elapsed, will retry) Jan 25 11:02:58.510: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z5b2v exposes endpoints map[pod1:[100] pod2:[101]] (8.870271723s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-z5b2v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z5b2v to expose endpoints map[pod2:[101]] Jan 25 11:02:59.853: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z5b2v exposes endpoints map[pod2:[101]] (1.302956419s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-z5b2v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z5b2v to expose endpoints map[] Jan 25 11:03:01.215: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z5b2v exposes endpoints map[] (1.348331235s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:03:01.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-z5b2v" for this suite. Jan 25 11:03:25.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:03:25.427: INFO: namespace: e2e-tests-services-z5b2v, resource: bindings, ignored listing per whitelist Jan 25 11:03:25.515: INFO: namespace e2e-tests-services-z5b2v deletion completed in 24.229058095s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:48.346 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:03:25.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-twgd STEP: Creating a pod to test atomic-volume-subpath Jan 25 11:03:25.927: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-twgd" in namespace "e2e-tests-subpath-ftbpv" to be "success or failure" Jan 25 11:03:25.954: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.967638ms Jan 25 11:03:28.620: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693038693s Jan 25 11:03:30.640: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712673548s Jan 25 11:03:32.688: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.761196029s Jan 25 11:03:34.790: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862748545s Jan 25 11:03:36.806: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.87875677s Jan 25 11:03:38.833: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.906016902s Jan 25 11:03:41.112: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.185237767s Jan 25 11:03:43.271: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.343820474s Jan 25 11:03:45.404: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.476336644s Jan 25 11:03:47.668: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.740556426s Jan 25 11:03:49.699: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.772269628s Jan 25 11:03:51.709: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 25.781497911s Jan 25 11:03:53.732: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 27.804854033s Jan 25 11:03:55.752: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 29.82528703s Jan 25 11:03:57.785: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 31.858168783s Jan 25 11:03:59.809: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 33.881749786s Jan 25 11:04:01.852: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 35.92489151s Jan 25 11:04:03.884: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 37.956981128s Jan 25 11:04:05.902: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 39.974448861s Jan 25 11:04:07.922: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Running", Reason="", readiness=false. Elapsed: 41.994446757s Jan 25 11:04:09.933: INFO: Pod "pod-subpath-test-projected-twgd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 44.005732396s STEP: Saw pod success Jan 25 11:04:09.933: INFO: Pod "pod-subpath-test-projected-twgd" satisfied condition "success or failure" Jan 25 11:04:09.937: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-twgd container test-container-subpath-projected-twgd: STEP: delete the pod Jan 25 11:04:11.835: INFO: Waiting for pod pod-subpath-test-projected-twgd to disappear Jan 25 11:04:12.048: INFO: Pod pod-subpath-test-projected-twgd no longer exists STEP: Deleting pod pod-subpath-test-projected-twgd Jan 25 11:04:12.048: INFO: Deleting pod "pod-subpath-test-projected-twgd" in namespace "e2e-tests-subpath-ftbpv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:04:12.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-ftbpv" for this suite. Jan 25 11:04:20.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:04:20.624: INFO: namespace: e2e-tests-subpath-ftbpv, resource: bindings, ignored listing per whitelist Jan 25 11:04:20.719: INFO: namespace e2e-tests-subpath-ftbpv deletion completed in 8.6461855s • [SLOW TEST:55.204 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:04:20.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-7065d333-3f62-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:04:21.393: INFO: Waiting up to 5m0s for pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-gnmsv" to be "success or failure" Jan 25 11:04:21.475: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 81.1601ms Jan 25 11:04:24.546: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.152257593s Jan 25 11:04:26.595: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.20175268s Jan 25 11:04:28.620: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.226225031s Jan 25 11:04:30.647: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.253119187s Jan 25 11:04:32.766: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.372630965s Jan 25 11:04:34.869: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.475925707s Jan 25 11:04:36.902: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.507978013s STEP: Saw pod success Jan 25 11:04:36.902: INFO: Pod "pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:04:36.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006 container configmap-volume-test: STEP: delete the pod Jan 25 11:04:38.120: INFO: Waiting for pod pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006 to disappear Jan 25 11:04:38.198: INFO: Pod pod-configmaps-7067992e-3f62-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:04:38.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gnmsv" for this suite. Jan 25 11:04:44.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:04:44.569: INFO: namespace: e2e-tests-configmap-gnmsv, resource: bindings, ignored listing per whitelist Jan 25 11:04:44.613: INFO: namespace e2e-tests-configmap-gnmsv deletion completed in 6.306260277s • [SLOW TEST:23.893 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:04:44.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-65krp STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-65krp STEP: Deleting pre-stop pod Jan 25 11:05:05.921: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:05:05.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-65krp" for this suite. Jan 25 11:05:46.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:05:46.242: INFO: namespace: e2e-tests-prestop-65krp, resource: bindings, ignored listing per whitelist Jan 25 11:05:46.296: INFO: namespace e2e-tests-prestop-65krp deletion completed in 40.317236508s • [SLOW TEST:61.682 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:05:46.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5n4gk [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 25 11:05:46.597: INFO: Found 0 stateful pods, waiting for 3 Jan 25 11:05:56.619: INFO: Found 1 stateful pods, waiting for 3 Jan 25 11:06:06.762: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:06:06.762: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:06:06.762: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 11:06:16.615: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:06:16.615: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:06:16.615: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 25 11:06:16.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5n4gk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 25 11:06:17.277: INFO: stderr: "I0125 11:06:16.877770 888 log.go:172] (0xc00074a370) (0xc000772640) Create 
stream\nI0125 11:06:16.877973 888 log.go:172] (0xc00074a370) (0xc000772640) Stream added, broadcasting: 1\nI0125 11:06:16.885126 888 log.go:172] (0xc00074a370) Reply frame received for 1\nI0125 11:06:16.885165 888 log.go:172] (0xc00074a370) (0xc0007726e0) Create stream\nI0125 11:06:16.885173 888 log.go:172] (0xc00074a370) (0xc0007726e0) Stream added, broadcasting: 3\nI0125 11:06:16.887468 888 log.go:172] (0xc00074a370) Reply frame received for 3\nI0125 11:06:16.887501 888 log.go:172] (0xc00074a370) (0xc0005a2d20) Create stream\nI0125 11:06:16.887517 888 log.go:172] (0xc00074a370) (0xc0005a2d20) Stream added, broadcasting: 5\nI0125 11:06:16.889832 888 log.go:172] (0xc00074a370) Reply frame received for 5\nI0125 11:06:17.077342 888 log.go:172] (0xc00074a370) Data frame received for 3\nI0125 11:06:17.077410 888 log.go:172] (0xc0007726e0) (3) Data frame handling\nI0125 11:06:17.077450 888 log.go:172] (0xc0007726e0) (3) Data frame sent\nI0125 11:06:17.265669 888 log.go:172] (0xc00074a370) Data frame received for 1\nI0125 11:06:17.265756 888 log.go:172] (0xc00074a370) (0xc0007726e0) Stream removed, broadcasting: 3\nI0125 11:06:17.265838 888 log.go:172] (0xc000772640) (1) Data frame handling\nI0125 11:06:17.265865 888 log.go:172] (0xc000772640) (1) Data frame sent\nI0125 11:06:17.265876 888 log.go:172] (0xc00074a370) (0xc000772640) Stream removed, broadcasting: 1\nI0125 11:06:17.265893 888 log.go:172] (0xc00074a370) (0xc0005a2d20) Stream removed, broadcasting: 5\nI0125 11:06:17.265907 888 log.go:172] (0xc00074a370) Go away received\nI0125 11:06:17.266218 888 log.go:172] (0xc00074a370) (0xc000772640) Stream removed, broadcasting: 1\nI0125 11:06:17.266242 888 log.go:172] (0xc00074a370) (0xc0007726e0) Stream removed, broadcasting: 3\nI0125 11:06:17.266253 888 log.go:172] (0xc00074a370) (0xc0005a2d20) Stream removed, broadcasting: 5\n" Jan 25 11:06:17.277: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 25 11:06:17.277: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 25 11:06:17.489: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 25 11:06:27.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5n4gk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 11:06:28.501: INFO: stderr: "I0125 11:06:28.089828 910 log.go:172] (0xc0006ae0b0) (0xc0006de5a0) Create stream\nI0125 11:06:28.090091 910 log.go:172] (0xc0006ae0b0) (0xc0006de5a0) Stream added, broadcasting: 1\nI0125 11:06:28.097879 910 log.go:172] (0xc0006ae0b0) Reply frame received for 1\nI0125 11:06:28.097913 910 log.go:172] (0xc0006ae0b0) (0xc0008780a0) Create stream\nI0125 11:06:28.097921 910 log.go:172] (0xc0006ae0b0) (0xc0008780a0) Stream added, broadcasting: 3\nI0125 11:06:28.098895 910 log.go:172] (0xc0006ae0b0) Reply frame received for 3\nI0125 11:06:28.098931 910 log.go:172] (0xc0006ae0b0) (0xc0000e8d20) Create stream\nI0125 11:06:28.098945 910 log.go:172] (0xc0006ae0b0) (0xc0000e8d20) Stream added, broadcasting: 5\nI0125 11:06:28.099988 910 log.go:172] (0xc0006ae0b0) Reply frame received for 5\nI0125 11:06:28.204573 910 log.go:172] (0xc0006ae0b0) Data frame received for 3\nI0125 11:06:28.204687 910 log.go:172] (0xc0008780a0) (3) Data frame 
handling\nI0125 11:06:28.204715 910 log.go:172] (0xc0008780a0) (3) Data frame sent\nI0125 11:06:28.488374 910 log.go:172] (0xc0006ae0b0) (0xc0008780a0) Stream removed, broadcasting: 3\nI0125 11:06:28.488802 910 log.go:172] (0xc0006ae0b0) Data frame received for 1\nI0125 11:06:28.488991 910 log.go:172] (0xc0006ae0b0) (0xc0000e8d20) Stream removed, broadcasting: 5\nI0125 11:06:28.489089 910 log.go:172] (0xc0006de5a0) (1) Data frame handling\nI0125 11:06:28.489111 910 log.go:172] (0xc0006de5a0) (1) Data frame sent\nI0125 11:06:28.489126 910 log.go:172] (0xc0006ae0b0) (0xc0006de5a0) Stream removed, broadcasting: 1\nI0125 11:06:28.489172 910 log.go:172] (0xc0006ae0b0) Go away received\nI0125 11:06:28.489701 910 log.go:172] (0xc0006ae0b0) (0xc0006de5a0) Stream removed, broadcasting: 1\nI0125 11:06:28.489726 910 log.go:172] (0xc0006ae0b0) (0xc0008780a0) Stream removed, broadcasting: 3\nI0125 11:06:28.489739 910 log.go:172] (0xc0006ae0b0) (0xc0000e8d20) Stream removed, broadcasting: 5\n" Jan 25 11:06:28.502: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 25 11:06:28.502: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 25 11:06:28.680: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:06:28.681: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:28.681: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:28.681: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:38.802: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:06:38.803: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:38.803: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:48.711: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:06:48.711: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:48.711: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:06:59.916: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:06:59.917: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:07:08.699: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:07:08.699: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 25 11:07:18.775: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update STEP: Rolling back to a previous revision Jan 25 11:07:28.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5n4gk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 25 11:07:29.434: INFO: stderr: "I0125 11:07:28.959435 932 log.go:172] (0xc000700370) (0xc000740640) Create stream\nI0125 11:07:28.959666 932 log.go:172] (0xc000700370) 
(0xc000740640) Stream added, broadcasting: 1\nI0125 11:07:28.967215 932 log.go:172] (0xc000700370) Reply frame received for 1\nI0125 11:07:28.967262 932 log.go:172] (0xc000700370) (0xc0007406e0) Create stream\nI0125 11:07:28.967278 932 log.go:172] (0xc000700370) (0xc0007406e0) Stream added, broadcasting: 3\nI0125 11:07:28.969030 932 log.go:172] (0xc000700370) Reply frame received for 3\nI0125 11:07:28.969054 932 log.go:172] (0xc000700370) (0xc000740780) Create stream\nI0125 11:07:28.969063 932 log.go:172] (0xc000700370) (0xc000740780) Stream added, broadcasting: 5\nI0125 11:07:28.970235 932 log.go:172] (0xc000700370) Reply frame received for 5\nI0125 11:07:29.265814 932 log.go:172] (0xc000700370) Data frame received for 3\nI0125 11:07:29.265929 932 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0125 11:07:29.265949 932 log.go:172] (0xc0007406e0) (3) Data frame sent\nI0125 11:07:29.419364 932 log.go:172] (0xc000700370) Data frame received for 1\nI0125 11:07:29.419616 932 log.go:172] (0xc000700370) (0xc000740780) Stream removed, broadcasting: 5\nI0125 11:07:29.419790 932 log.go:172] (0xc000740640) (1) Data frame handling\nI0125 11:07:29.419849 932 log.go:172] (0xc000740640) (1) Data frame sent\nI0125 11:07:29.420060 932 log.go:172] (0xc000700370) (0xc0007406e0) Stream removed, broadcasting: 3\nI0125 11:07:29.420207 932 log.go:172] (0xc000700370) (0xc000740640) Stream removed, broadcasting: 1\nI0125 11:07:29.420264 932 log.go:172] (0xc000700370) Go away received\nI0125 11:07:29.421122 932 log.go:172] (0xc000700370) (0xc000740640) Stream removed, broadcasting: 1\nI0125 11:07:29.421297 932 log.go:172] (0xc000700370) (0xc0007406e0) Stream removed, broadcasting: 3\nI0125 11:07:29.421336 932 log.go:172] (0xc000700370) (0xc000740780) Stream removed, broadcasting: 5\n" Jan 25 11:07:29.435: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 25 11:07:29.435: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 25 11:07:39.654: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 25 11:07:49.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5n4gk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 25 11:07:51.035: INFO: stderr: "I0125 11:07:49.999509 955 log.go:172] (0xc000868210) (0xc00070a640) Create stream\nI0125 11:07:49.999692 955 log.go:172] (0xc000868210) (0xc00070a640) Stream added, broadcasting: 1\nI0125 11:07:50.005689 955 log.go:172] (0xc000868210) Reply frame received for 1\nI0125 11:07:50.005729 955 log.go:172] (0xc000868210) (0xc00039ab40) Create stream\nI0125 11:07:50.005743 955 log.go:172] (0xc000868210) (0xc00039ab40) Stream added, broadcasting: 3\nI0125 11:07:50.007571 955 log.go:172] (0xc000868210) Reply frame received for 3\nI0125 11:07:50.007596 955 log.go:172] (0xc000868210) (0xc00034a000) Create stream\nI0125 11:07:50.007606 955 log.go:172] (0xc000868210) (0xc00034a000) Stream added, broadcasting: 5\nI0125 11:07:50.009099 955 log.go:172] (0xc000868210) Reply frame received for 5\nI0125 11:07:50.843838 955 log.go:172] (0xc000868210) Data frame received for 3\nI0125 11:07:50.843910 955 log.go:172] (0xc00039ab40) (3) Data frame handling\nI0125 11:07:50.843926 955 log.go:172] (0xc00039ab40) (3) Data frame sent\nI0125 11:07:51.024536 955 log.go:172] (0xc000868210) (0xc00034a000) Stream removed, broadcasting: 5\nI0125 11:07:51.024996 955 
log.go:172] (0xc000868210) (0xc00039ab40) Stream removed, broadcasting: 3\nI0125 11:07:51.025070 955 log.go:172] (0xc000868210) Data frame received for 1\nI0125 11:07:51.025086 955 log.go:172] (0xc00070a640) (1) Data frame handling\nI0125 11:07:51.025107 955 log.go:172] (0xc00070a640) (1) Data frame sent\nI0125 11:07:51.025167 955 log.go:172] (0xc000868210) (0xc00070a640) Stream removed, broadcasting: 1\nI0125 11:07:51.025188 955 log.go:172] (0xc000868210) Go away received\nI0125 11:07:51.025601 955 log.go:172] (0xc000868210) (0xc00070a640) Stream removed, broadcasting: 1\nI0125 11:07:51.025628 955 log.go:172] (0xc000868210) (0xc00039ab40) Stream removed, broadcasting: 3\nI0125 11:07:51.025645 955 log.go:172] (0xc000868210) (0xc00034a000) Stream removed, broadcasting: 5\n" Jan 25 11:07:51.035: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 25 11:07:51.035: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 25 11:08:01.144: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:08:01.144: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 25 11:08:01.144: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 25 11:08:11.165: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:08:11.165: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 25 11:08:11.165: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 25 11:08:21.176: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:08:21.176: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 25 11:08:31.177: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update Jan 25 11:08:31.177: INFO: Waiting for Pod e2e-tests-statefulset-5n4gk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 25 11:08:41.193: INFO: Waiting for StatefulSet e2e-tests-statefulset-5n4gk/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 25 11:08:51.189: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5n4gk Jan 25 11:08:51.196: INFO: Scaling statefulset ss2 to 0 Jan 25 11:09:21.256: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 11:09:21.263: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:09:21.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-5n4gk" for this suite. 
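The StatefulSet test above updates the pod template image from nginx:1.14-alpine to nginx:1.15-alpine, waits for the controller to replace pods in reverse ordinal order, and then rolls the template back to the previous revision. A minimal kubectl sketch of the same flow, assuming a reachable cluster; the StatefulSet/Service name `web` and the replica count are illustrative, not taken from the log:

```
# Headless service plus a small StatefulSet comparable to the test's ss2 object.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None
  selector:
    app: web
  ports:
  - name: http
    port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Rolling update: change the pod template image and watch the rollout.
kubectl set image statefulset/web nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status statefulset/web

# Rollback: reverting the template restores the previous controller revision,
# which is what the test's "Rolling back to a previous revision" step verifies.
kubectl set image statefulset/web nginx=docker.io/library/nginx:1.14-alpine
kubectl rollout status statefulset/web
```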
Jan 25 11:09:31.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:09:31.398: INFO: namespace: e2e-tests-statefulset-5n4gk, resource: bindings, ignored listing per whitelist Jan 25 11:09:31.465: INFO: namespace e2e-tests-statefulset-5n4gk deletion completed in 10.171465791s • [SLOW TEST:225.169 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:09:31.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 25 11:09:31.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-swdgb' Jan 25 11:09:34.065: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 11:09:34.065: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jan 25 11:09:36.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-swdgb' Jan 25 11:09:36.930: INFO: stderr: "" Jan 25 11:09:36.930: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:09:36.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-swdgb" for this suite. 
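The "Kubectl run default" test above drives the v1.13-era `kubectl run`, which defaults to the deployment/apps.v1 generator and prints the deprecation warning captured in the log. A small sketch of the non-deprecated equivalents that warning points to; resource names mirror the test but are otherwise illustrative, and current kubectl versions create only Pods from `kubectl run`:

```
# A bare pod (what the run-pod/v1 generator mentioned in the warning produces):
kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never

# Or an explicit Deployment, the form that remains stable on newer clients:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# Clean up, matching the test's AfterEach step.
kubectl delete deployment e2e-test-nginx-deployment
kubectl delete pod e2e-test-nginx-pod
```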
Jan 25 11:09:45.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:09:45.146: INFO: namespace: e2e-tests-kubectl-swdgb, resource: bindings, ignored listing per whitelist Jan 25 11:09:45.219: INFO: namespace e2e-tests-kubectl-swdgb deletion completed in 8.270823346s • [SLOW TEST:13.753 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:09:45.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-31d2821e-3f63-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:09:46.087: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-dc49h" to be "success or failure" Jan 25 11:09:46.185: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 97.973683ms Jan 25 11:09:48.767: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.679602179s Jan 25 11:09:50.781: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.693777716s Jan 25 11:09:53.827: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.739147594s Jan 25 11:09:55.845: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.757118064s Jan 25 11:09:58.564: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.47645898s Jan 25 11:10:00.579: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.491769839s STEP: Saw pod success Jan 25 11:10:00.579: INFO: Pod "pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:10:00.583: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: STEP: delete the pod Jan 25 11:10:02.110: INFO: Waiting for pod pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006 to disappear Jan 25 11:10:02.679: INFO: Pod pod-projected-configmaps-31d743d5-3f63-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:10:02.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dc49h" for this suite. Jan 25 11:10:10.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:10:11.084: INFO: namespace: e2e-tests-projected-dc49h, resource: bindings, ignored listing per whitelist Jan 25 11:10:11.113: INFO: namespace e2e-tests-projected-dc49h deletion completed in 8.418932809s • [SLOW TEST:25.894 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:10:11.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 25 11:10:27.998: INFO: Successfully updated pod "annotationupdate4109845f-3f63-11ea-8a8b-0242ac110006" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:10:30.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dgs92" for this suite. 
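The projected downwardAPI test that just finished creates a pod whose projected volume exposes its own annotations, updates an annotation, and waits for the mounted file to refresh. A minimal sketch of that setup, assuming a reachable cluster; the pod name, annotation key, and mount path are illustrative:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

kubectl wait --for=condition=Ready pod/annotationupdate-demo

# Changing an annotation is eventually reflected in the mounted file,
# which is the behaviour the e2e test asserts.
kubectl annotate pod annotationupdate-demo build="2" --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
```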
Jan 25 11:10:56.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:10:56.581: INFO: namespace: e2e-tests-projected-dgs92, resource: bindings, ignored listing per whitelist Jan 25 11:10:56.641: INFO: namespace e2e-tests-projected-dgs92 deletion completed in 26.543893465s • [SLOW TEST:45.528 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:10:56.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:10:56.790: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:11:13.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5lvlx" for this suite. 
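The Pods test above dials the API server's exec subresource over a websocket to run a command in the pod. `kubectl exec` exercises the same subresource from the command line (typically over SPDY in clients of this era), so the sketch below is only an approximation of what the test verifies; the pod name and image are illustrative:

```
# Start a long-running pod to exec into.
kubectl run exec-demo --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/exec-demo

# Run a remote command in the pod's only container.
kubectl exec exec-demo -- cat /etc/resolv.conf

# Clean up.
kubectl delete pod exec-demo
```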
Jan 25 11:11:59.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:11:59.624: INFO: namespace: e2e-tests-pods-5lvlx, resource: bindings, ignored listing per whitelist Jan 25 11:11:59.741: INFO: namespace e2e-tests-pods-5lvlx deletion completed in 46.360936385s • [SLOW TEST:63.099 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:11:59.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:12:00.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 25 11:12:00.687: INFO: stderr: "" Jan 25 11:12:00.687: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:12:00.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7wrrj" for this suite. 
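The Kubectl version test simply checks that both client and server version blocks are printed, as seen verbatim in the stdout above. The same information can be pulled by hand, and structured output is easier to assert on in scripts:

```
# Human-readable client and server versions, as captured in the log.
kubectl version

# Machine-readable form for scripted checks.
kubectl version -o json
```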
Jan 25 11:12:06.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:12:06.890: INFO: namespace: e2e-tests-kubectl-7wrrj, resource: bindings, ignored listing per whitelist Jan 25 11:12:07.043: INFO: namespace e2e-tests-kubectl-7wrrj deletion completed in 6.290606879s • [SLOW TEST:7.302 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:12:07.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-8614fe77-3f63-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:12:07.335: INFO: Waiting up to 5m0s for pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-l49hx" to be "success or failure" Jan 25 11:12:07.375: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 39.901032ms Jan 25 11:12:09.394: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058666491s Jan 25 11:12:11.432: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097377486s Jan 25 11:12:13.449: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113606139s Jan 25 11:12:15.488: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152692682s Jan 25 11:12:17.963: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.627970775s Jan 25 11:12:19.980: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.644757049s STEP: Saw pod success Jan 25 11:12:19.980: INFO: Pod "pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:12:19.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006 container configmap-volume-test: STEP: delete the pod Jan 25 11:12:20.167: INFO: Waiting for pod pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006 to disappear Jan 25 11:12:20.181: INFO: Pod pod-configmaps-86261498-3f63-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:12:20.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l49hx" for this suite. Jan 25 11:12:28.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:12:28.503: INFO: namespace: e2e-tests-configmap-l49hx, resource: bindings, ignored listing per whitelist Jan 25 11:12:28.623: INFO: namespace e2e-tests-configmap-l49hx deletion completed in 8.426046661s • [SLOW TEST:21.580 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:12:28.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 25 11:12:28.886: INFO: Waiting up to 5m0s for pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-sdcvx" to be "success or failure" Jan 25 11:12:28.902: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.498562ms Jan 25 11:12:31.125: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239184624s Jan 25 11:12:33.145: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259052844s Jan 25 11:12:35.574: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687848684s Jan 25 11:12:37.590: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.704199651s Jan 25 11:12:39.737: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.851023178s STEP: Saw pod success Jan 25 11:12:39.737: INFO: Pod "pod-92f238af-3f63-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:12:39.751: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-92f238af-3f63-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:12:39.958: INFO: Waiting for pod pod-92f238af-3f63-11ea-8a8b-0242ac110006 to disappear Jan 25 11:12:39.989: INFO: Pod pod-92f238af-3f63-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:12:39.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sdcvx" for this suite. Jan 25 11:12:46.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:12:46.242: INFO: namespace: e2e-tests-emptydir-sdcvx, resource: bindings, ignored listing per whitelist Jan 25 11:12:46.390: INFO: namespace e2e-tests-emptydir-sdcvx deletion completed in 6.388534875s • [SLOW TEST:17.766 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:12:46.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jan 25 11:12:56.925: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:13:22.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-blpmp" for this suite. Jan 25 11:13:28.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:13:28.452: INFO: namespace: e2e-tests-namespaces-blpmp, resource: bindings, ignored listing per whitelist Jan 25 11:13:29.026: INFO: namespace e2e-tests-namespaces-blpmp deletion completed in 6.937873104s STEP: Destroying namespace "e2e-tests-nsdeletetest-dwgqf" for this suite. 
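The Namespaces test that just ran verifies that deleting a namespace removes every pod inside it. A short sketch of the same lifecycle, assuming a reachable cluster; the namespace and pod names are illustrative:

```
kubectl create namespace nsdelete-demo
kubectl run nsdelete-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never -n nsdelete-demo
kubectl wait --for=condition=Ready pod/nsdelete-pod -n nsdelete-demo

# Deleting the namespace blocks until it finishes terminating and takes the pod with it.
kubectl delete namespace nsdelete-demo
kubectl get pods -n nsdelete-demo   # nothing left once the namespace is gone
```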
Jan 25 11:13:29.144: INFO: Namespace e2e-tests-nsdeletetest-dwgqf was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-9tpzk" for this suite. Jan 25 11:13:35.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:13:35.742: INFO: namespace: e2e-tests-nsdeletetest-9tpzk, resource: bindings, ignored listing per whitelist Jan 25 11:13:35.777: INFO: namespace e2e-tests-nsdeletetest-9tpzk deletion completed in 6.632137571s • [SLOW TEST:49.386 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:13:35.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:13:35.984: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:13:37.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-qxrzn" for this suite. 
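The CustomResourceDefinition test above creates a random CRD and deletes it again. A minimal CRD in the apiextensions.k8s.io/v1beta1 form served by the v1.13 cluster in this log (current clusters use apiextensions.k8s.io/v1 with a required schema); the group and kind are illustrative:

```
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF

# The new API type becomes discoverable shortly after creation,
# and removing the CRD deletes it again.
kubectl get crd foos.example.com
kubectl delete crd foos.example.com
```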
Jan 25 11:13:43.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:13:43.516: INFO: namespace: e2e-tests-custom-resource-definition-qxrzn, resource: bindings, ignored listing per whitelist Jan 25 11:13:43.564: INFO: namespace e2e-tests-custom-resource-definition-qxrzn deletion completed in 6.282733426s • [SLOW TEST:7.787 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:13:43.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-bfb17bc3-3f63-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:13:43.906: INFO: Waiting up to 5m0s for pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-9kjx5" to be "success or failure" Jan 25 11:13:43.914: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.333861ms Jan 25 11:13:45.959: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052247647s Jan 25 11:13:47.981: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074233843s Jan 25 11:13:50.031: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124302616s Jan 25 11:13:52.073: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166066052s Jan 25 11:13:54.166: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.259109301s STEP: Saw pod success Jan 25 11:13:54.166: INFO: Pod "pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:13:54.205: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006 container configmap-volume-test: STEP: delete the pod Jan 25 11:13:54.303: INFO: Waiting for pod pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006 to disappear Jan 25 11:13:54.320: INFO: Pod pod-configmaps-bfb31502-3f63-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:13:54.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9kjx5" for this suite. Jan 25 11:14:00.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:14:00.483: INFO: namespace: e2e-tests-configmap-9kjx5, resource: bindings, ignored listing per whitelist Jan 25 11:14:00.651: INFO: namespace e2e-tests-configmap-9kjx5 deletion completed in 6.318175203s • [SLOW TEST:17.087 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:14:00.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:14:01.296: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c9f89ed1-3f63-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0005164ca), BlockOwnerDeletion:(*bool)(0xc0005164cb)}} Jan 25 11:14:01.332: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c9f30eb9-3f63-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00001c182), BlockOwnerDeletion:(*bool)(0xc00001c183)}} Jan 25 11:14:01.453: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c9f7523f-3f63-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00001d92a), BlockOwnerDeletion:(*bool)(0xc00001d92b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:14:06.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bxpwg" for this suite. 
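The garbage collector test above wires pod1, pod2, and pod3 into a cycle through metadata.ownerReferences (each UID logged above points at another pod in the ring) and checks that deletion still terminates. A sketch of attaching a single owner reference by hand, assuming a reachable cluster; the pod names are illustrative and the uid field must carry the real UID of the owner object, here read back with jsonpath:

```
kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl run pod2 --image=docker.io/library/nginx:1.14-alpine --restart=Never

# Make pod2 a dependent of pod1 by patching in an ownerReference.
OWNER_UID=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod2 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$OWNER_UID\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"

# Once the owner is deleted, the garbage collector removes the dependent.
kubectl delete pod pod1
kubectl get pod pod2   # disappears after the collector catches up
```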
Jan 25 11:14:14.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:14:14.796: INFO: namespace: e2e-tests-gc-bxpwg, resource: bindings, ignored listing per whitelist Jan 25 11:14:14.894: INFO: namespace e2e-tests-gc-bxpwg deletion completed in 8.359334461s • [SLOW TEST:14.242 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:14:14.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 25 11:14:15.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ztcwq' Jan 25 11:14:15.196: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 11:14:15.197: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 25 11:14:17.539: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-6n8kp] Jan 25 11:14:17.539: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-6n8kp" in namespace "e2e-tests-kubectl-ztcwq" to be "running and ready" Jan 25 11:14:17.548: INFO: Pod "e2e-test-nginx-rc-6n8kp": Phase="Pending", Reason="", readiness=false. Elapsed: 9.007186ms Jan 25 11:14:19.569: INFO: Pod "e2e-test-nginx-rc-6n8kp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029649859s Jan 25 11:14:21.832: INFO: Pod "e2e-test-nginx-rc-6n8kp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292395773s Jan 25 11:14:23.846: INFO: Pod "e2e-test-nginx-rc-6n8kp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.306400084s Jan 25 11:14:25.871: INFO: Pod "e2e-test-nginx-rc-6n8kp": Phase="Running", Reason="", readiness=true. Elapsed: 8.331567999s Jan 25 11:14:25.871: INFO: Pod "e2e-test-nginx-rc-6n8kp" satisfied condition "running and ready" Jan 25 11:14:25.871: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-6n8kp] Jan 25 11:14:25.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ztcwq' Jan 25 11:14:26.141: INFO: stderr: "" Jan 25 11:14:26.142: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jan 25 11:14:26.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ztcwq' Jan 25 11:14:26.337: INFO: stderr: "" Jan 25 11:14:26.337: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:14:26.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ztcwq" for this suite. Jan 25 11:14:42.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:14:42.744: INFO: namespace: e2e-tests-kubectl-ztcwq, resource: bindings, ignored listing per whitelist Jan 25 11:14:42.790: INFO: namespace e2e-tests-kubectl-ztcwq deletion completed in 16.303925453s • [SLOW TEST:27.896 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:14:42.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 25 11:14:43.022: INFO: Waiting up to 5m0s for pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-zqdlw" to be "success or failure" Jan 25 11:14:43.188: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 166.054074ms Jan 25 11:14:45.495: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472954178s Jan 25 11:14:47.508: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485753108s Jan 25 11:14:49.616: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594616316s Jan 25 11:14:51.680: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.65806292s Jan 25 11:14:53.718: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.696201166s STEP: Saw pod success Jan 25 11:14:53.718: INFO: Pod "downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:14:53.730: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006 container dapi-container: STEP: delete the pod Jan 25 11:14:54.827: INFO: Waiting for pod downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006 to disappear Jan 25 11:14:54.838: INFO: Pod downward-api-e2f1d499-3f63-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:14:54.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zqdlw" for this suite. Jan 25 11:15:00.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:15:01.034: INFO: namespace: e2e-tests-downward-api-zqdlw, resource: bindings, ignored listing per whitelist Jan 25 11:15:01.071: INFO: namespace e2e-tests-downward-api-zqdlw deletion completed in 6.225075503s • [SLOW TEST:18.281 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:15:01.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0125 11:15:43.392568 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
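The step being logged here deletes the replication controller with an orphaning delete option and then waits to confirm the pods survive. The same behaviour can be exercised with plain kubectl; the RC name and image below are placeholders, and the cascade flag spelling depends on the kubectl release (boolean --cascade=false in the v1.13 era of this run, --cascade=orphan from v1.20 on).

# Placeholder RC with two pods.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 2
  selector:
    app: demo-rc
  template:
    metadata:
      labels:
        app: demo-rc
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Delete only the RC and orphan its pods.
kubectl delete rc demo-rc --cascade=false    # on kubectl >= 1.20: --cascade=orphan

# The pods should still be listed, with their ownerReference to the RC removed.
kubectl get pods -l app=demo-rc -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.ownerReferences}{"\n"}{end}'
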
Jan 25 11:15:43.392: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:15:43.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5qsbr" for this suite. Jan 25 11:16:05.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:16:05.614: INFO: namespace: e2e-tests-gc-5qsbr, resource: bindings, ignored listing per whitelist Jan 25 11:16:05.749: INFO: namespace e2e-tests-gc-5qsbr deletion completed in 22.352177933s • [SLOW TEST:64.678 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:16:05.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 25 11:16:06.074: INFO: Waiting up to 5m0s for pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-qbq9p" to be "success or failure" Jan 25 11:16:06.152: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 77.12481ms Jan 25 11:16:08.178: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103356418s Jan 25 11:16:10.199: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.124687558s Jan 25 11:16:12.221: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147052113s Jan 25 11:16:14.239: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164530788s Jan 25 11:16:16.253: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178156268s STEP: Saw pod success Jan 25 11:16:16.253: INFO: Pod "pod-1473774c-3f64-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:16:16.256: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1473774c-3f64-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:16:16.489: INFO: Waiting for pod pod-1473774c-3f64-11ea-8a8b-0242ac110006 to disappear Jan 25 11:16:16.527: INFO: Pod pod-1473774c-3f64-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:16:16.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qbq9p" for this suite. Jan 25 11:16:23.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:16:23.101: INFO: namespace: e2e-tests-emptydir-qbq9p, resource: bindings, ignored listing per whitelist Jan 25 11:16:23.298: INFO: namespace e2e-tests-emptydir-qbq9p deletion completed in 6.759016915s • [SLOW TEST:17.548 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:16:23.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-1ed93915-3f64-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume secrets Jan 25 11:16:23.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-hk829" to be "success or failure" Jan 25 11:16:23.763: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 203.763435ms Jan 25 11:16:25.888: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328702082s Jan 25 11:16:27.907: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.347999337s Jan 25 11:16:29.923: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363823605s Jan 25 11:16:33.329: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.770147359s Jan 25 11:16:35.344: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.784877354s Jan 25 11:16:37.362: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.802901613s Jan 25 11:16:39.380: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.820815207s STEP: Saw pod success Jan 25 11:16:39.380: INFO: Pod "pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:16:39.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006 container projected-secret-volume-test: STEP: delete the pod Jan 25 11:16:39.766: INFO: Waiting for pod pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006 to disappear Jan 25 11:16:39.962: INFO: Pod pod-projected-secrets-1eddc5de-3f64-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:16:39.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hk829" for this suite. Jan 25 11:16:47.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:16:47.556: INFO: namespace: e2e-tests-projected-hk829, resource: bindings, ignored listing per whitelist Jan 25 11:16:47.671: INFO: namespace e2e-tests-projected-hk829 deletion completed in 7.692708408s • [SLOW TEST:24.373 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:16:47.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-2d8653c9-3f64-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume secrets Jan 25 11:16:48.156: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-bzrkr" to be "success or failure" Jan 25 11:16:48.168: INFO: Pod 
"pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.658013ms Jan 25 11:16:51.441: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.285045777s Jan 25 11:16:53.463: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.307037022s Jan 25 11:16:55.492: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.33571092s Jan 25 11:16:58.038: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.882444902s Jan 25 11:17:00.179: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022781837s Jan 25 11:17:02.203: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.047267291s STEP: Saw pod success Jan 25 11:17:02.204: INFO: Pod "pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:17:02.208: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006 container secret-volume-test: STEP: delete the pod Jan 25 11:17:03.693: INFO: Waiting for pod pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006 to disappear Jan 25 11:17:03.844: INFO: Pod pod-projected-secrets-2d8802f8-3f64-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:17:03.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bzrkr" for this suite. 
Jan 25 11:17:09.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:17:09.990: INFO: namespace: e2e-tests-projected-bzrkr, resource: bindings, ignored listing per whitelist Jan 25 11:17:10.028: INFO: namespace e2e-tests-projected-bzrkr deletion completed in 6.164030397s • [SLOW TEST:22.357 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:17:10.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0125 11:17:40.948839 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 11:17:40.948: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:17:40.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wjvtz" for this suite. 
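The deleteOptions.PropagationPolicy=Orphan behaviour exercised just above can also be driven straight against the API, which mirrors the deleteOptions the test sets. A sketch using kubectl proxy and curl; the deployment name, image, namespace and proxy port are assumptions rather than values from the run.

kubectl create deployment demo-orphan --image=docker.io/library/nginx:1.14-alpine

# Expose the API locally, then delete the deployment with an explicit Orphan policy.
kubectl proxy --port=8001 &
sleep 2    # give the proxy a moment to start listening
curl -X DELETE 'http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/demo-orphan' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'

# The deployment is gone, but its ReplicaSet and pods should still be present.
kubectl get rs,pods -l app=demo-orphan

On kubectl v1.20 or newer the same effect is available as kubectl delete deployment demo-orphan --cascade=orphan.
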
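Stepping back to the two projected-secret specs further up: both mount a single secret through projected volumes and read it back from inside the pod. A hand-written equivalent of the multi-volume case might look like the following; the secret name, pod name, key and busybox image are placeholders rather than the suite's generated names.

kubectl create secret generic demo-projected-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-secret-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-1/data-1 /etc/projected-secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-1
    - name: secret-volume-2
      mountPath: /etc/projected-secret-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: demo-projected-secret
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: demo-projected-secret
EOF

# Once the pod completes, both mount points should show the same secret value.
kubectl logs demo-projected-secret-pod
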
Jan 25 11:17:52.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:17:53.150: INFO: namespace: e2e-tests-gc-wjvtz, resource: bindings, ignored listing per whitelist Jan 25 11:17:53.202: INFO: namespace e2e-tests-gc-wjvtz deletion completed in 12.249353439s • [SLOW TEST:43.174 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:17:53.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-8k9rm I0125 11:17:53.465493 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-8k9rm, replica count: 1 I0125 11:17:54.516294 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:17:55.516987 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:17:56.517844 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:17:57.518361 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:17:58.519174 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:17:59.519900 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:18:00.520647 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:18:01.521222 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:18:02.521901 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:18:03.522517 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 11:18:04.523044 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 0 runningButNotReady Jan 25 11:18:04.699: INFO: Created: latency-svc-dmjmk Jan 25 11:18:04.839: INFO: Got endpoints: latency-svc-dmjmk [216.125088ms] Jan 25 11:18:05.106: INFO: Created: latency-svc-ckl2t Jan 25 11:18:05.262: INFO: Got endpoints: latency-svc-ckl2t [420.815491ms] Jan 25 11:18:05.287: INFO: Created: latency-svc-95c24 Jan 25 11:18:05.302: INFO: Got endpoints: latency-svc-95c24 [460.917369ms] Jan 25 11:18:05.494: INFO: Created: latency-svc-96ldw Jan 25 11:18:05.504: INFO: Got endpoints: latency-svc-96ldw [242.191855ms] Jan 25 11:18:05.598: INFO: Created: latency-svc-m2wr7 Jan 25 11:18:05.761: INFO: Got endpoints: latency-svc-m2wr7 [921.251655ms] Jan 25 11:18:05.897: INFO: Created: latency-svc-6pttt Jan 25 11:18:06.046: INFO: Got endpoints: latency-svc-6pttt [1.204518999s] Jan 25 11:18:06.129: INFO: Created: latency-svc-r42hg Jan 25 11:18:06.249: INFO: Got endpoints: latency-svc-r42hg [1.409111818s] Jan 25 11:18:06.280: INFO: Created: latency-svc-mx9bz Jan 25 11:18:06.312: INFO: Got endpoints: latency-svc-mx9bz [1.470876321s] Jan 25 11:18:06.493: INFO: Created: latency-svc-q64sc Jan 25 11:18:06.511: INFO: Got endpoints: latency-svc-q64sc [1.669914966s] Jan 25 11:18:06.659: INFO: Created: latency-svc-f7wf7 Jan 25 11:18:06.725: INFO: Got endpoints: latency-svc-f7wf7 [1.883217706s] Jan 25 11:18:06.928: INFO: Created: latency-svc-sjmhh Jan 25 11:18:06.946: INFO: Got endpoints: latency-svc-sjmhh [2.105528149s] Jan 25 11:18:07.123: INFO: Created: latency-svc-jtjdp Jan 25 11:18:07.128: INFO: Got endpoints: latency-svc-jtjdp [2.287100873s] Jan 25 11:18:07.328: INFO: Created: latency-svc-q2gdt Jan 25 11:18:07.343: INFO: Got endpoints: latency-svc-q2gdt [2.50139026s] Jan 25 11:18:07.499: INFO: Created: latency-svc-kqn2w Jan 25 11:18:07.520: INFO: Got endpoints: latency-svc-kqn2w [2.679175609s] Jan 25 11:18:07.560: INFO: Created: latency-svc-zvrx5 Jan 25 11:18:07.575: INFO: Got endpoints: latency-svc-zvrx5 [2.734720556s] Jan 25 11:18:07.750: INFO: Created: latency-svc-xc4kn Jan 25 11:18:07.788: INFO: Got endpoints: latency-svc-xc4kn [2.946324646s] Jan 25 11:18:07.928: INFO: Created: latency-svc-bk4h6 Jan 25 11:18:07.953: INFO: Got endpoints: latency-svc-bk4h6 [3.11230317s] Jan 25 11:18:08.014: INFO: Created: latency-svc-5clr6 Jan 25 11:18:08.132: INFO: Got endpoints: latency-svc-5clr6 [2.829145163s] Jan 25 11:18:08.175: INFO: Created: latency-svc-j6zvm Jan 25 11:18:08.183: INFO: Got endpoints: latency-svc-j6zvm [2.678305312s] Jan 25 11:18:08.339: INFO: Created: latency-svc-sdg9n Jan 25 11:18:08.370: INFO: Got endpoints: latency-svc-sdg9n [2.608782954s] Jan 25 11:18:08.426: INFO: Created: latency-svc-frk6l Jan 25 11:18:08.501: INFO: Got endpoints: latency-svc-frk6l [2.454611456s] Jan 25 11:18:08.782: INFO: Created: latency-svc-v6sml Jan 25 11:18:08.791: INFO: Got endpoints: latency-svc-v6sml [2.541432997s] Jan 25 11:18:08.996: INFO: Created: latency-svc-xp2md Jan 25 11:18:09.002: INFO: Got endpoints: latency-svc-xp2md [2.689624958s] Jan 25 11:18:09.265: INFO: Created: latency-svc-rtx9r Jan 25 11:18:09.303: INFO: Got endpoints: latency-svc-rtx9r [2.791326159s] Jan 25 11:18:09.479: INFO: Created: latency-svc-7tdxq Jan 25 11:18:09.491: INFO: Got endpoints: latency-svc-7tdxq [2.765889406s] Jan 25 11:18:09.534: INFO: Created: latency-svc-gqpsf Jan 25 11:18:09.556: INFO: Got endpoints: latency-svc-gqpsf [2.609507376s] Jan 25 11:18:09.672: INFO: Created: latency-svc-pg2b7 Jan 25 11:18:09.698: INFO: Got endpoints: latency-svc-pg2b7 [2.569497607s] Jan 25 11:18:09.773: INFO: 
Created: latency-svc-wn4kh Jan 25 11:18:09.881: INFO: Got endpoints: latency-svc-wn4kh [2.537470322s] Jan 25 11:18:09.941: INFO: Created: latency-svc-ttsjk Jan 25 11:18:10.077: INFO: Got endpoints: latency-svc-ttsjk [2.556168156s] Jan 25 11:18:10.103: INFO: Created: latency-svc-trzgj Jan 25 11:18:10.125: INFO: Got endpoints: latency-svc-trzgj [2.550413347s] Jan 25 11:18:10.345: INFO: Created: latency-svc-zw8dh Jan 25 11:18:10.346: INFO: Got endpoints: latency-svc-zw8dh [2.557639541s] Jan 25 11:18:10.561: INFO: Created: latency-svc-cgkzx Jan 25 11:18:10.618: INFO: Got endpoints: latency-svc-cgkzx [2.663869464s] Jan 25 11:18:10.862: INFO: Created: latency-svc-wldzz Jan 25 11:18:10.914: INFO: Got endpoints: latency-svc-wldzz [2.781873169s] Jan 25 11:18:11.115: INFO: Created: latency-svc-lpcpd Jan 25 11:18:11.136: INFO: Got endpoints: latency-svc-lpcpd [2.953098672s] Jan 25 11:18:11.335: INFO: Created: latency-svc-8cxl8 Jan 25 11:18:11.355: INFO: Got endpoints: latency-svc-8cxl8 [2.984522237s] Jan 25 11:18:11.545: INFO: Created: latency-svc-fdpbc Jan 25 11:18:11.547: INFO: Got endpoints: latency-svc-fdpbc [3.045940742s] Jan 25 11:18:11.605: INFO: Created: latency-svc-vxm2k Jan 25 11:18:11.711: INFO: Got endpoints: latency-svc-vxm2k [2.920452465s] Jan 25 11:18:11.752: INFO: Created: latency-svc-4w4sc Jan 25 11:18:11.762: INFO: Got endpoints: latency-svc-4w4sc [2.759888086s] Jan 25 11:18:11.920: INFO: Created: latency-svc-q66fg Jan 25 11:18:11.945: INFO: Got endpoints: latency-svc-q66fg [2.642045768s] Jan 25 11:18:12.084: INFO: Created: latency-svc-pgx28 Jan 25 11:18:12.109: INFO: Got endpoints: latency-svc-pgx28 [2.618044512s] Jan 25 11:18:12.297: INFO: Created: latency-svc-rn5cc Jan 25 11:18:12.318: INFO: Got endpoints: latency-svc-rn5cc [2.761626569s] Jan 25 11:18:12.384: INFO: Created: latency-svc-r49sq Jan 25 11:18:12.589: INFO: Got endpoints: latency-svc-r49sq [2.890538998s] Jan 25 11:18:12.663: INFO: Created: latency-svc-96b2n Jan 25 11:18:12.664: INFO: Got endpoints: latency-svc-96b2n [2.782766388s] Jan 25 11:18:12.874: INFO: Created: latency-svc-lrhpk Jan 25 11:18:12.891: INFO: Got endpoints: latency-svc-lrhpk [2.813632858s] Jan 25 11:18:13.048: INFO: Created: latency-svc-4pjg7 Jan 25 11:18:13.071: INFO: Got endpoints: latency-svc-4pjg7 [2.945217798s] Jan 25 11:18:13.216: INFO: Created: latency-svc-7wstv Jan 25 11:18:13.253: INFO: Got endpoints: latency-svc-7wstv [2.906988163s] Jan 25 11:18:13.433: INFO: Created: latency-svc-mlbbw Jan 25 11:18:13.451: INFO: Got endpoints: latency-svc-mlbbw [2.833264607s] Jan 25 11:18:13.522: INFO: Created: latency-svc-7jtts Jan 25 11:18:13.629: INFO: Got endpoints: latency-svc-7jtts [2.714904616s] Jan 25 11:18:13.649: INFO: Created: latency-svc-hh5zh Jan 25 11:18:13.666: INFO: Got endpoints: latency-svc-hh5zh [2.529765544s] Jan 25 11:18:13.719: INFO: Created: latency-svc-9zjcz Jan 25 11:18:13.900: INFO: Got endpoints: latency-svc-9zjcz [2.545253288s] Jan 25 11:18:13.928: INFO: Created: latency-svc-dngnq Jan 25 11:18:13.947: INFO: Got endpoints: latency-svc-dngnq [2.3999277s] Jan 25 11:18:14.191: INFO: Created: latency-svc-s6k9z Jan 25 11:18:14.203: INFO: Got endpoints: latency-svc-s6k9z [2.491108401s] Jan 25 11:18:14.438: INFO: Created: latency-svc-5nvrq Jan 25 11:18:14.444: INFO: Got endpoints: latency-svc-5nvrq [2.68178231s] Jan 25 11:18:14.680: INFO: Created: latency-svc-2fdf2 Jan 25 11:18:14.740: INFO: Got endpoints: latency-svc-2fdf2 [2.794986596s] Jan 25 11:18:14.881: INFO: Created: latency-svc-mzswd Jan 25 11:18:15.052: INFO: Created: 
latency-svc-mdvls Jan 25 11:18:15.062: INFO: Got endpoints: latency-svc-mzswd [2.95262904s] Jan 25 11:18:15.100: INFO: Got endpoints: latency-svc-mdvls [2.782063369s] Jan 25 11:18:15.239: INFO: Created: latency-svc-hdvj7 Jan 25 11:18:15.253: INFO: Got endpoints: latency-svc-hdvj7 [2.663558962s] Jan 25 11:18:15.558: INFO: Created: latency-svc-jg9cg Jan 25 11:18:15.817: INFO: Got endpoints: latency-svc-jg9cg [3.153394511s] Jan 25 11:18:15.841: INFO: Created: latency-svc-4jc44 Jan 25 11:18:15.883: INFO: Got endpoints: latency-svc-4jc44 [2.991627797s] Jan 25 11:18:16.104: INFO: Created: latency-svc-xvtsl Jan 25 11:18:16.109: INFO: Got endpoints: latency-svc-xvtsl [3.037589925s] Jan 25 11:18:16.382: INFO: Created: latency-svc-2gxxz Jan 25 11:18:16.427: INFO: Got endpoints: latency-svc-2gxxz [3.17390554s] Jan 25 11:18:16.567: INFO: Created: latency-svc-t4pt9 Jan 25 11:18:16.771: INFO: Got endpoints: latency-svc-t4pt9 [3.320188s] Jan 25 11:18:16.805: INFO: Created: latency-svc-mgws2 Jan 25 11:18:17.023: INFO: Got endpoints: latency-svc-mgws2 [3.393258014s] Jan 25 11:18:17.049: INFO: Created: latency-svc-m2hk8 Jan 25 11:18:17.086: INFO: Got endpoints: latency-svc-m2hk8 [3.420111343s] Jan 25 11:18:17.220: INFO: Created: latency-svc-lxdzv Jan 25 11:18:17.235: INFO: Got endpoints: latency-svc-lxdzv [3.334577917s] Jan 25 11:18:17.304: INFO: Created: latency-svc-7794m Jan 25 11:18:17.479: INFO: Got endpoints: latency-svc-7794m [3.531455776s] Jan 25 11:18:17.515: INFO: Created: latency-svc-t4pdp Jan 25 11:18:17.592: INFO: Created: latency-svc-xnx59 Jan 25 11:18:17.592: INFO: Got endpoints: latency-svc-t4pdp [3.389076216s] Jan 25 11:18:17.751: INFO: Got endpoints: latency-svc-xnx59 [3.306566926s] Jan 25 11:18:17.817: INFO: Created: latency-svc-dbq4w Jan 25 11:18:17.838: INFO: Got endpoints: latency-svc-dbq4w [3.09726964s] Jan 25 11:18:18.055: INFO: Created: latency-svc-7qsq4 Jan 25 11:18:18.089: INFO: Got endpoints: latency-svc-7qsq4 [3.026910845s] Jan 25 11:18:18.239: INFO: Created: latency-svc-qwndl Jan 25 11:18:18.254: INFO: Got endpoints: latency-svc-qwndl [3.153436336s] Jan 25 11:18:18.400: INFO: Created: latency-svc-g9n4f Jan 25 11:18:18.454: INFO: Got endpoints: latency-svc-g9n4f [3.201190988s] Jan 25 11:18:18.576: INFO: Created: latency-svc-2bz4x Jan 25 11:18:18.617: INFO: Got endpoints: latency-svc-2bz4x [2.798947378s] Jan 25 11:18:18.672: INFO: Created: latency-svc-594jj Jan 25 11:18:18.860: INFO: Got endpoints: latency-svc-594jj [2.977082361s] Jan 25 11:18:18.921: INFO: Created: latency-svc-j69cv Jan 25 11:18:18.929: INFO: Got endpoints: latency-svc-j69cv [2.820355429s] Jan 25 11:18:19.060: INFO: Created: latency-svc-hbg9b Jan 25 11:18:19.075: INFO: Got endpoints: latency-svc-hbg9b [2.648046164s] Jan 25 11:18:19.249: INFO: Created: latency-svc-zhcfm Jan 25 11:18:19.260: INFO: Got endpoints: latency-svc-zhcfm [2.488512313s] Jan 25 11:18:19.540: INFO: Created: latency-svc-gf494 Jan 25 11:18:19.565: INFO: Got endpoints: latency-svc-gf494 [2.541446031s] Jan 25 11:18:20.067: INFO: Created: latency-svc-8dckk Jan 25 11:18:20.102: INFO: Got endpoints: latency-svc-8dckk [3.015417328s] Jan 25 11:18:20.364: INFO: Created: latency-svc-lt2ht Jan 25 11:18:20.514: INFO: Got endpoints: latency-svc-lt2ht [3.278568823s] Jan 25 11:18:20.585: INFO: Created: latency-svc-ndmk7 Jan 25 11:18:20.791: INFO: Got endpoints: latency-svc-ndmk7 [3.311874164s] Jan 25 11:18:20.810: INFO: Created: latency-svc-qgkdd Jan 25 11:18:20.832: INFO: Got endpoints: latency-svc-qgkdd [3.240051351s] Jan 25 11:18:21.108: INFO: Created: 
latency-svc-wtqr9 Jan 25 11:18:21.125: INFO: Got endpoints: latency-svc-wtqr9 [3.373796364s] Jan 25 11:18:21.319: INFO: Created: latency-svc-7kc2c Jan 25 11:18:21.338: INFO: Got endpoints: latency-svc-7kc2c [3.499502935s] Jan 25 11:18:21.655: INFO: Created: latency-svc-56p6v Jan 25 11:18:21.688: INFO: Got endpoints: latency-svc-56p6v [3.598941614s] Jan 25 11:18:21.895: INFO: Created: latency-svc-pkjwz Jan 25 11:18:21.942: INFO: Got endpoints: latency-svc-pkjwz [3.687691195s] Jan 25 11:18:22.188: INFO: Created: latency-svc-z7nnv Jan 25 11:18:22.217: INFO: Got endpoints: latency-svc-z7nnv [3.762045828s] Jan 25 11:18:22.413: INFO: Created: latency-svc-n7mgp Jan 25 11:18:22.635: INFO: Got endpoints: latency-svc-n7mgp [4.017938318s] Jan 25 11:18:22.721: INFO: Created: latency-svc-kcw9c Jan 25 11:18:22.957: INFO: Created: latency-svc-h5h52 Jan 25 11:18:22.959: INFO: Got endpoints: latency-svc-kcw9c [4.098566189s] Jan 25 11:18:23.106: INFO: Got endpoints: latency-svc-h5h52 [4.176545015s] Jan 25 11:18:23.471: INFO: Created: latency-svc-48mrs Jan 25 11:18:23.487: INFO: Got endpoints: latency-svc-48mrs [4.411350598s] Jan 25 11:18:24.574: INFO: Created: latency-svc-lppnj Jan 25 11:18:24.613: INFO: Got endpoints: latency-svc-lppnj [5.353106374s] Jan 25 11:18:25.011: INFO: Created: latency-svc-t7dvw Jan 25 11:18:25.195: INFO: Got endpoints: latency-svc-t7dvw [5.629715242s] Jan 25 11:18:25.405: INFO: Created: latency-svc-rq9mf Jan 25 11:18:25.598: INFO: Got endpoints: latency-svc-rq9mf [5.495519713s] Jan 25 11:18:25.623: INFO: Created: latency-svc-lfcdr Jan 25 11:18:25.627: INFO: Got endpoints: latency-svc-lfcdr [5.112448487s] Jan 25 11:18:25.961: INFO: Created: latency-svc-pjhj2 Jan 25 11:18:25.981: INFO: Got endpoints: latency-svc-pjhj2 [5.190053566s] Jan 25 11:18:26.149: INFO: Created: latency-svc-vkdbs Jan 25 11:18:26.166: INFO: Got endpoints: latency-svc-vkdbs [5.333603922s] Jan 25 11:18:26.414: INFO: Created: latency-svc-wmwfp Jan 25 11:18:26.438: INFO: Got endpoints: latency-svc-wmwfp [5.313233168s] Jan 25 11:18:26.676: INFO: Created: latency-svc-q7t7t Jan 25 11:18:26.697: INFO: Got endpoints: latency-svc-q7t7t [5.359329696s] Jan 25 11:18:26.936: INFO: Created: latency-svc-p5gmv Jan 25 11:18:26.946: INFO: Got endpoints: latency-svc-p5gmv [5.257393324s] Jan 25 11:18:27.188: INFO: Created: latency-svc-klqjm Jan 25 11:18:27.302: INFO: Got endpoints: latency-svc-klqjm [5.360060956s] Jan 25 11:18:27.356: INFO: Created: latency-svc-8tc92 Jan 25 11:18:27.385: INFO: Got endpoints: latency-svc-8tc92 [5.168065197s] Jan 25 11:18:27.593: INFO: Created: latency-svc-lrbvp Jan 25 11:18:27.600: INFO: Got endpoints: latency-svc-lrbvp [4.964326189s] Jan 25 11:18:27.771: INFO: Created: latency-svc-bgxtn Jan 25 11:18:27.807: INFO: Got endpoints: latency-svc-bgxtn [4.846937348s] Jan 25 11:18:28.033: INFO: Created: latency-svc-kbvfg Jan 25 11:18:28.050: INFO: Got endpoints: latency-svc-kbvfg [4.94358432s] Jan 25 11:18:28.291: INFO: Created: latency-svc-w67dd Jan 25 11:18:28.383: INFO: Got endpoints: latency-svc-w67dd [4.896054621s] Jan 25 11:18:28.566: INFO: Created: latency-svc-dtjrd Jan 25 11:18:28.742: INFO: Got endpoints: latency-svc-dtjrd [4.127876152s] Jan 25 11:18:28.921: INFO: Created: latency-svc-chgfh Jan 25 11:18:29.117: INFO: Got endpoints: latency-svc-chgfh [3.921916768s] Jan 25 11:18:29.119: INFO: Created: latency-svc-5wmmd Jan 25 11:18:29.144: INFO: Got endpoints: latency-svc-5wmmd [3.545388642s] Jan 25 11:18:29.396: INFO: Created: latency-svc-tll7j Jan 25 11:18:29.508: INFO: Got endpoints: 
latency-svc-tll7j [3.880983445s] Jan 25 11:18:29.532: INFO: Created: latency-svc-dxj44 Jan 25 11:18:29.541: INFO: Got endpoints: latency-svc-dxj44 [3.55874336s] Jan 25 11:18:29.766: INFO: Created: latency-svc-vfdz7 Jan 25 11:18:29.799: INFO: Got endpoints: latency-svc-vfdz7 [3.632618998s] Jan 25 11:18:29.993: INFO: Created: latency-svc-czfg9 Jan 25 11:18:30.017: INFO: Got endpoints: latency-svc-czfg9 [3.578465773s] Jan 25 11:18:30.214: INFO: Created: latency-svc-cssp9 Jan 25 11:18:30.242: INFO: Got endpoints: latency-svc-cssp9 [3.544233724s] Jan 25 11:18:30.504: INFO: Created: latency-svc-k2jdh Jan 25 11:18:30.542: INFO: Got endpoints: latency-svc-k2jdh [3.595286293s] Jan 25 11:18:30.699: INFO: Created: latency-svc-r8qfv Jan 25 11:18:30.723: INFO: Got endpoints: latency-svc-r8qfv [3.419893684s] Jan 25 11:18:30.944: INFO: Created: latency-svc-f27xw Jan 25 11:18:30.961: INFO: Got endpoints: latency-svc-f27xw [3.576056684s] Jan 25 11:18:31.118: INFO: Created: latency-svc-5wk6h Jan 25 11:18:31.134: INFO: Got endpoints: latency-svc-5wk6h [3.534538906s] Jan 25 11:18:31.326: INFO: Created: latency-svc-tlnsz Jan 25 11:18:31.341: INFO: Got endpoints: latency-svc-tlnsz [3.533379693s] Jan 25 11:18:31.540: INFO: Created: latency-svc-snb95 Jan 25 11:18:31.559: INFO: Got endpoints: latency-svc-snb95 [3.509115025s] Jan 25 11:18:31.762: INFO: Created: latency-svc-k87f2 Jan 25 11:18:31.967: INFO: Got endpoints: latency-svc-k87f2 [3.582861556s] Jan 25 11:18:31.997: INFO: Created: latency-svc-gtrcq Jan 25 11:18:32.021: INFO: Got endpoints: latency-svc-gtrcq [3.278864952s] Jan 25 11:18:33.221: INFO: Created: latency-svc-dhl6v Jan 25 11:18:33.426: INFO: Got endpoints: latency-svc-dhl6v [4.308604334s] Jan 25 11:18:33.467: INFO: Created: latency-svc-k8xgq Jan 25 11:18:33.490: INFO: Got endpoints: latency-svc-k8xgq [4.346576994s] Jan 25 11:18:33.652: INFO: Created: latency-svc-kblvb Jan 25 11:18:33.687: INFO: Got endpoints: latency-svc-kblvb [4.178687124s] Jan 25 11:18:33.903: INFO: Created: latency-svc-w7mdx Jan 25 11:18:33.952: INFO: Got endpoints: latency-svc-w7mdx [4.411342438s] Jan 25 11:18:34.077: INFO: Created: latency-svc-mjg6t Jan 25 11:18:34.111: INFO: Got endpoints: latency-svc-mjg6t [4.311507955s] Jan 25 11:18:34.280: INFO: Created: latency-svc-rzjwm Jan 25 11:18:34.363: INFO: Got endpoints: latency-svc-rzjwm [4.345657098s] Jan 25 11:18:34.531: INFO: Created: latency-svc-gwzlv Jan 25 11:18:34.573: INFO: Got endpoints: latency-svc-gwzlv [4.331487645s] Jan 25 11:18:34.695: INFO: Created: latency-svc-s48ts Jan 25 11:18:34.717: INFO: Got endpoints: latency-svc-s48ts [4.175287696s] Jan 25 11:18:34.868: INFO: Created: latency-svc-hdz2j Jan 25 11:18:34.917: INFO: Got endpoints: latency-svc-hdz2j [4.193707906s] Jan 25 11:18:35.108: INFO: Created: latency-svc-zfsjf Jan 25 11:18:35.141: INFO: Got endpoints: latency-svc-zfsjf [4.179105328s] Jan 25 11:18:35.348: INFO: Created: latency-svc-rj9mb Jan 25 11:18:35.378: INFO: Got endpoints: latency-svc-rj9mb [4.243700188s] Jan 25 11:18:35.514: INFO: Created: latency-svc-28c7n Jan 25 11:18:35.722: INFO: Created: latency-svc-cdw2f Jan 25 11:18:35.730: INFO: Got endpoints: latency-svc-28c7n [4.388754451s] Jan 25 11:18:35.783: INFO: Got endpoints: latency-svc-cdw2f [4.223442258s] Jan 25 11:18:35.950: INFO: Created: latency-svc-v8qv5 Jan 25 11:18:35.961: INFO: Got endpoints: latency-svc-v8qv5 [3.993485394s] Jan 25 11:18:36.207: INFO: Created: latency-svc-7g8x9 Jan 25 11:18:36.232: INFO: Got endpoints: latency-svc-7g8x9 [4.210918315s] Jan 25 11:18:36.499: INFO: Created: 
latency-svc-7ltf6 Jan 25 11:18:36.503: INFO: Got endpoints: latency-svc-7ltf6 [3.077228151s] Jan 25 11:18:36.695: INFO: Created: latency-svc-dbjtv Jan 25 11:18:36.695: INFO: Got endpoints: latency-svc-dbjtv [3.204812483s] Jan 25 11:18:36.900: INFO: Created: latency-svc-fspsl Jan 25 11:18:36.926: INFO: Got endpoints: latency-svc-fspsl [3.239047081s] Jan 25 11:18:37.117: INFO: Created: latency-svc-7ftwr Jan 25 11:18:37.118: INFO: Got endpoints: latency-svc-7ftwr [3.165412865s] Jan 25 11:18:37.381: INFO: Created: latency-svc-xjvmv Jan 25 11:18:37.421: INFO: Got endpoints: latency-svc-xjvmv [3.310505483s] Jan 25 11:18:37.631: INFO: Created: latency-svc-f7jzv Jan 25 11:18:37.669: INFO: Got endpoints: latency-svc-f7jzv [3.305981734s] Jan 25 11:18:37.929: INFO: Created: latency-svc-nmgp8 Jan 25 11:18:38.070: INFO: Got endpoints: latency-svc-nmgp8 [3.496057829s] Jan 25 11:18:38.117: INFO: Created: latency-svc-qg7dw Jan 25 11:18:38.159: INFO: Got endpoints: latency-svc-qg7dw [3.441466503s] Jan 25 11:18:38.646: INFO: Created: latency-svc-zkr4n Jan 25 11:18:38.818: INFO: Got endpoints: latency-svc-zkr4n [3.900715727s] Jan 25 11:18:38.839: INFO: Created: latency-svc-f55k5 Jan 25 11:18:38.878: INFO: Got endpoints: latency-svc-f55k5 [3.737006284s] Jan 25 11:18:39.028: INFO: Created: latency-svc-xb4vc Jan 25 11:18:39.042: INFO: Got endpoints: latency-svc-xb4vc [3.663358742s] Jan 25 11:18:39.139: INFO: Created: latency-svc-9hwnt Jan 25 11:18:39.282: INFO: Got endpoints: latency-svc-9hwnt [3.552612866s] Jan 25 11:18:39.319: INFO: Created: latency-svc-vz6xv Jan 25 11:18:39.357: INFO: Got endpoints: latency-svc-vz6xv [3.574262828s] Jan 25 11:18:39.594: INFO: Created: latency-svc-85lx7 Jan 25 11:18:39.594: INFO: Got endpoints: latency-svc-85lx7 [3.633079177s] Jan 25 11:18:39.893: INFO: Created: latency-svc-js4tk Jan 25 11:18:40.120: INFO: Got endpoints: latency-svc-js4tk [3.887838652s] Jan 25 11:18:40.159: INFO: Created: latency-svc-hd9b4 Jan 25 11:18:40.164: INFO: Got endpoints: latency-svc-hd9b4 [3.660557297s] Jan 25 11:18:40.475: INFO: Created: latency-svc-ql8d4 Jan 25 11:18:40.475: INFO: Got endpoints: latency-svc-ql8d4 [3.779334382s] Jan 25 11:18:40.647: INFO: Created: latency-svc-fwmkf Jan 25 11:18:40.659: INFO: Got endpoints: latency-svc-fwmkf [3.732420593s] Jan 25 11:18:40.745: INFO: Created: latency-svc-mxkhg Jan 25 11:18:40.976: INFO: Got endpoints: latency-svc-mxkhg [3.858334536s] Jan 25 11:18:41.004: INFO: Created: latency-svc-vshz9 Jan 25 11:18:41.038: INFO: Got endpoints: latency-svc-vshz9 [3.616288787s] Jan 25 11:18:41.225: INFO: Created: latency-svc-5dv2z Jan 25 11:18:41.230: INFO: Got endpoints: latency-svc-5dv2z [3.559995577s] Jan 25 11:18:41.439: INFO: Created: latency-svc-8kwtr Jan 25 11:18:41.462: INFO: Got endpoints: latency-svc-8kwtr [3.391074283s] Jan 25 11:18:41.926: INFO: Created: latency-svc-2l6l4 Jan 25 11:18:41.959: INFO: Got endpoints: latency-svc-2l6l4 [3.79966137s] Jan 25 11:18:42.156: INFO: Created: latency-svc-l7229 Jan 25 11:18:42.210: INFO: Got endpoints: latency-svc-l7229 [3.3916403s] Jan 25 11:18:42.233: INFO: Created: latency-svc-knnlm Jan 25 11:18:42.418: INFO: Got endpoints: latency-svc-knnlm [3.540193462s] Jan 25 11:18:42.693: INFO: Created: latency-svc-mgf78 Jan 25 11:18:42.708: INFO: Got endpoints: latency-svc-mgf78 [3.665367089s] Jan 25 11:18:42.888: INFO: Created: latency-svc-ltgmf Jan 25 11:18:42.941: INFO: Got endpoints: latency-svc-ltgmf [3.658215928s] Jan 25 11:18:43.076: INFO: Created: latency-svc-jrlvc Jan 25 11:18:43.126: INFO: Got endpoints: 
latency-svc-jrlvc [3.767999658s] Jan 25 11:18:43.282: INFO: Created: latency-svc-t72z9 Jan 25 11:18:43.288: INFO: Got endpoints: latency-svc-t72z9 [3.694108435s] Jan 25 11:18:43.409: INFO: Created: latency-svc-6vhdg Jan 25 11:18:43.435: INFO: Got endpoints: latency-svc-6vhdg [3.313762306s] Jan 25 11:18:43.497: INFO: Created: latency-svc-2hbrd Jan 25 11:18:43.521: INFO: Got endpoints: latency-svc-2hbrd [3.356568076s] Jan 25 11:18:43.700: INFO: Created: latency-svc-42bmf Jan 25 11:18:43.831: INFO: Got endpoints: latency-svc-42bmf [3.356086064s] Jan 25 11:18:43.860: INFO: Created: latency-svc-zqmkm Jan 25 11:18:43.868: INFO: Got endpoints: latency-svc-zqmkm [3.209113435s] Jan 25 11:18:44.080: INFO: Created: latency-svc-nxbws Jan 25 11:18:44.120: INFO: Got endpoints: latency-svc-nxbws [3.143715415s] Jan 25 11:18:44.288: INFO: Created: latency-svc-w462w Jan 25 11:18:44.293: INFO: Got endpoints: latency-svc-w462w [3.254910166s] Jan 25 11:18:44.338: INFO: Created: latency-svc-xdvkm Jan 25 11:18:44.362: INFO: Got endpoints: latency-svc-xdvkm [3.132311508s] Jan 25 11:18:44.554: INFO: Created: latency-svc-kv4d7 Jan 25 11:18:44.567: INFO: Got endpoints: latency-svc-kv4d7 [3.105389829s] Jan 25 11:18:44.785: INFO: Created: latency-svc-bmz6p Jan 25 11:18:44.813: INFO: Got endpoints: latency-svc-bmz6p [2.853797343s] Jan 25 11:18:45.081: INFO: Created: latency-svc-mswnn Jan 25 11:18:45.145: INFO: Got endpoints: latency-svc-mswnn [2.934917971s] Jan 25 11:18:45.324: INFO: Created: latency-svc-bqm72 Jan 25 11:18:45.353: INFO: Got endpoints: latency-svc-bqm72 [2.934666344s] Jan 25 11:18:45.658: INFO: Created: latency-svc-cvwdg Jan 25 11:18:45.697: INFO: Got endpoints: latency-svc-cvwdg [2.988549584s] Jan 25 11:18:45.951: INFO: Created: latency-svc-drrnv Jan 25 11:18:46.210: INFO: Created: latency-svc-fqxqq Jan 25 11:18:46.234: INFO: Got endpoints: latency-svc-fqxqq [3.10707732s] Jan 25 11:18:46.234: INFO: Got endpoints: latency-svc-drrnv [3.292066482s] Jan 25 11:18:46.513: INFO: Created: latency-svc-kbqbr Jan 25 11:18:46.516: INFO: Got endpoints: latency-svc-kbqbr [3.228002698s] Jan 25 11:18:46.775: INFO: Created: latency-svc-f6h4h Jan 25 11:18:46.781: INFO: Got endpoints: latency-svc-f6h4h [3.3459255s] Jan 25 11:18:47.159: INFO: Created: latency-svc-hcc9x Jan 25 11:18:47.342: INFO: Got endpoints: latency-svc-hcc9x [3.82083489s] Jan 25 11:18:47.430: INFO: Created: latency-svc-g94t4 Jan 25 11:18:47.430: INFO: Got endpoints: latency-svc-g94t4 [3.598257495s] Jan 25 11:18:47.615: INFO: Created: latency-svc-xgg2l Jan 25 11:18:47.878: INFO: Got endpoints: latency-svc-xgg2l [4.009257863s] Jan 25 11:18:47.959: INFO: Created: latency-svc-sq9vc Jan 25 11:18:47.989: INFO: Got endpoints: latency-svc-sq9vc [3.868307683s] Jan 25 11:18:48.123: INFO: Created: latency-svc-x2xs4 Jan 25 11:18:48.145: INFO: Got endpoints: latency-svc-x2xs4 [3.851814731s] Jan 25 11:18:48.369: INFO: Created: latency-svc-jp6lt Jan 25 11:18:48.375: INFO: Got endpoints: latency-svc-jp6lt [4.012493799s] Jan 25 11:18:48.536: INFO: Created: latency-svc-xpcvp Jan 25 11:18:48.581: INFO: Got endpoints: latency-svc-xpcvp [4.013908107s] Jan 25 11:18:48.792: INFO: Created: latency-svc-p2mtp Jan 25 11:18:49.007: INFO: Got endpoints: latency-svc-p2mtp [4.193722027s] Jan 25 11:18:49.028: INFO: Created: latency-svc-mgxvg Jan 25 11:18:49.041: INFO: Got endpoints: latency-svc-mgxvg [3.895978193s] Jan 25 11:18:49.078: INFO: Created: latency-svc-snls7 Jan 25 11:18:49.194: INFO: Got endpoints: latency-svc-snls7 [3.840394418s] Jan 25 11:18:49.258: INFO: Created: 
latency-svc-pnbxs Jan 25 11:18:49.460: INFO: Got endpoints: latency-svc-pnbxs [3.762874975s] Jan 25 11:18:49.890: INFO: Created: latency-svc-2j6cx Jan 25 11:18:50.135: INFO: Got endpoints: latency-svc-2j6cx [3.901048538s] Jan 25 11:18:50.227: INFO: Created: latency-svc-frqkj Jan 25 11:18:50.272: INFO: Got endpoints: latency-svc-frqkj [4.037331446s] Jan 25 11:18:50.433: INFO: Created: latency-svc-n7qc7 Jan 25 11:18:50.456: INFO: Got endpoints: latency-svc-n7qc7 [3.939393871s] Jan 25 11:18:50.644: INFO: Created: latency-svc-r49l6 Jan 25 11:18:50.680: INFO: Got endpoints: latency-svc-r49l6 [3.898892185s] Jan 25 11:18:50.683: INFO: Created: latency-svc-bfbhm Jan 25 11:18:50.693: INFO: Got endpoints: latency-svc-bfbhm [3.350205511s] Jan 25 11:18:50.837: INFO: Created: latency-svc-j5f74 Jan 25 11:18:50.871: INFO: Got endpoints: latency-svc-j5f74 [3.441261552s] Jan 25 11:18:50.900: INFO: Created: latency-svc-zntqj Jan 25 11:18:50.918: INFO: Got endpoints: latency-svc-zntqj [3.039918925s] Jan 25 11:18:51.014: INFO: Created: latency-svc-cvhfp Jan 25 11:18:51.015: INFO: Got endpoints: latency-svc-cvhfp [3.026050789s] Jan 25 11:18:51.016: INFO: Latencies: [242.191855ms 420.815491ms 460.917369ms 921.251655ms 1.204518999s 1.409111818s 1.470876321s 1.669914966s 1.883217706s 2.105528149s 2.287100873s 2.3999277s 2.454611456s 2.488512313s 2.491108401s 2.50139026s 2.529765544s 2.537470322s 2.541432997s 2.541446031s 2.545253288s 2.550413347s 2.556168156s 2.557639541s 2.569497607s 2.608782954s 2.609507376s 2.618044512s 2.642045768s 2.648046164s 2.663558962s 2.663869464s 2.678305312s 2.679175609s 2.68178231s 2.689624958s 2.714904616s 2.734720556s 2.759888086s 2.761626569s 2.765889406s 2.781873169s 2.782063369s 2.782766388s 2.791326159s 2.794986596s 2.798947378s 2.813632858s 2.820355429s 2.829145163s 2.833264607s 2.853797343s 2.890538998s 2.906988163s 2.920452465s 2.934666344s 2.934917971s 2.945217798s 2.946324646s 2.95262904s 2.953098672s 2.977082361s 2.984522237s 2.988549584s 2.991627797s 3.015417328s 3.026050789s 3.026910845s 3.037589925s 3.039918925s 3.045940742s 3.077228151s 3.09726964s 3.105389829s 3.10707732s 3.11230317s 3.132311508s 3.143715415s 3.153394511s 3.153436336s 3.165412865s 3.17390554s 3.201190988s 3.204812483s 3.209113435s 3.228002698s 3.239047081s 3.240051351s 3.254910166s 3.278568823s 3.278864952s 3.292066482s 3.305981734s 3.306566926s 3.310505483s 3.311874164s 3.313762306s 3.320188s 3.334577917s 3.3459255s 3.350205511s 3.356086064s 3.356568076s 3.373796364s 3.389076216s 3.391074283s 3.3916403s 3.393258014s 3.419893684s 3.420111343s 3.441261552s 3.441466503s 3.496057829s 3.499502935s 3.509115025s 3.531455776s 3.533379693s 3.534538906s 3.540193462s 3.544233724s 3.545388642s 3.552612866s 3.55874336s 3.559995577s 3.574262828s 3.576056684s 3.578465773s 3.582861556s 3.595286293s 3.598257495s 3.598941614s 3.616288787s 3.632618998s 3.633079177s 3.658215928s 3.660557297s 3.663358742s 3.665367089s 3.687691195s 3.694108435s 3.732420593s 3.737006284s 3.762045828s 3.762874975s 3.767999658s 3.779334382s 3.79966137s 3.82083489s 3.840394418s 3.851814731s 3.858334536s 3.868307683s 3.880983445s 3.887838652s 3.895978193s 3.898892185s 3.900715727s 3.901048538s 3.921916768s 3.939393871s 3.993485394s 4.009257863s 4.012493799s 4.013908107s 4.017938318s 4.037331446s 4.098566189s 4.127876152s 4.175287696s 4.176545015s 4.178687124s 4.179105328s 4.193707906s 4.193722027s 4.210918315s 4.223442258s 4.243700188s 4.308604334s 4.311507955s 4.331487645s 4.345657098s 4.346576994s 4.388754451s 4.411342438s 4.411350598s 
4.846937348s 4.896054621s 4.94358432s 4.964326189s 5.112448487s 5.168065197s 5.190053566s 5.257393324s 5.313233168s 5.333603922s 5.353106374s 5.359329696s 5.360060956s 5.495519713s 5.629715242s] Jan 25 11:18:51.016: INFO: 50 %ile: 3.350205511s Jan 25 11:18:51.016: INFO: 90 %ile: 4.345657098s Jan 25 11:18:51.016: INFO: 99 %ile: 5.495519713s Jan 25 11:18:51.016: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:18:51.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-8k9rm" for this suite. Jan 25 11:19:55.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:19:55.294: INFO: namespace: e2e-tests-svc-latency-8k9rm, resource: bindings, ignored listing per whitelist Jan 25 11:19:55.297: INFO: namespace e2e-tests-svc-latency-8k9rm deletion completed in 1m4.272925326s • [SLOW TEST:122.094 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:19:55.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:20:55.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-6zh2f" for this suite. 
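
For context on the container-probe spec above (a readiness probe that fails should leave the pod permanently not-ready and never restarted): the pod the suite creates is not shown in this log, but the shape of such a pod is simple. Below is a minimal sketch in Go, assuming the k8s.io/api and k8s.io/apimachinery modules are on the module path; the busybox image, pod name, and probe settings are illustrative, and note that k8s.io/api releases contemporary with this v1.13 suite embed the probe handler as Handler, while newer releases renamed it to ProbeHandler.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A pod whose readiness probe always fails: it stays Running but never
    // becomes Ready, and the container never restarts, because a failing
    // readiness probe only removes the pod from service endpoints.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "never-ready", Namespace: "default"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox",                 // illustrative image
                Command: []string{"sleep", "3600"}, // keep the container alive
                ReadinessProbe: &corev1.Probe{
                    // In k8s.io/api <= v0.22 this embedded field is Handler;
                    // later releases call it ProbeHandler.
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                },
            }},
        },
    }

    out, err := json.MarshalIndent(&pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}

The spec then observes the pod for roughly a minute (consistent with the gap between the BeforeEach and AfterEach timestamps above) and requires that the Ready condition never becomes true and the restart count stays at zero.
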
Jan 25 11:21:21.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:21:21.746: INFO: namespace: e2e-tests-container-probe-6zh2f, resource: bindings, ignored listing per whitelist Jan 25 11:21:21.746: INFO: namespace e2e-tests-container-probe-6zh2f deletion completed in 26.227398286s • [SLOW TEST:86.448 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:21:21.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 25 11:21:21.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-q699z' Jan 25 11:21:24.209: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 11:21:24.209: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jan 25 11:21:28.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-q699z' Jan 25 11:21:28.654: INFO: stderr: "" Jan 25 11:21:28.655: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:21:28.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-q699z" for this suite. 
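
The Kubectl run deployment spec above relies on the deprecated --generator=deployment/v1beta1 path, and the suite itself logs kubectl's deprecation warning. The non-deprecated equivalent is to declare the Deployment explicitly. A minimal sketch in Go, assuming k8s.io/api and k8s.io/apimachinery are available; the run=e2e-test-nginx-deployment label mirrors how the old run generators keyed their selectors, but the label and replica count are illustrative rather than a claim about the exact object the test produced. Only the image is taken from the log.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"run": "e2e-test-nginx-deployment"}

    // An apps/v1 Deployment equivalent to the deprecated
    // `kubectl run --generator=deployment/v1beta1` invocation in the log.
    dep := appsv1.Deployment{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine", // image from the log
                    }},
                },
            },
        },
    }

    out, err := json.MarshalIndent(&dep, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}

Feeding this JSON to kubectl apply produces the same kind of object the spec verifies (a Deployment plus the pod it controls) without depending on generators that newer kubectl releases have removed.
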
Jan 25 11:21:34.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:21:34.880: INFO: namespace: e2e-tests-kubectl-q699z, resource: bindings, ignored listing per whitelist Jan 25 11:21:35.099: INFO: namespace e2e-tests-kubectl-q699z deletion completed in 6.37075184s • [SLOW TEST:13.352 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:21:35.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 25 11:21:42.368: INFO: 10 pods remaining Jan 25 11:21:42.368: INFO: 10 pods has nil DeletionTimestamp Jan 25 11:21:42.368: INFO: Jan 25 11:21:44.374: INFO: 6 pods remaining Jan 25 11:21:44.374: INFO: 0 pods has nil DeletionTimestamp Jan 25 11:21:44.374: INFO: STEP: Gathering metrics W0125 11:21:45.135260 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
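
The garbage-collector spec above deletes a replication controller with deleteOptions that keep the owner around until its dependent pods are gone, i.e. foreground cascading deletion. Below is a minimal sketch of building such delete options in Go, assuming k8s.io/apimachinery is available; the commented client-go call and the rc name are illustrative and the exact Delete signature depends on your client-go version.

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Foreground propagation: the API server marks the owner (here, the rc)
    // with a deletionTimestamp and a foregroundDeletion finalizer, and only
    // removes it after the garbage collector has deleted all of its pods --
    // the behaviour the spec polls for above ("10 pods remaining", then
    // progressively fewer, before the rc itself disappears).
    policy := metav1.DeletePropagationForeground
    opts := metav1.DeleteOptions{PropagationPolicy: &policy}

    // With a recent client-go clientset this would be passed to the delete call:
    //   err := client.CoreV1().ReplicationControllers(ns).Delete(ctx, "my-rc", opts)

    out, err := json.MarshalIndent(&opts, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}
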
Jan 25 11:21:45.135: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:21:45.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9wbpx" for this suite. Jan 25 11:21:59.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:21:59.842: INFO: namespace: e2e-tests-gc-9wbpx, resource: bindings, ignored listing per whitelist Jan 25 11:21:59.865: INFO: namespace e2e-tests-gc-9wbpx deletion completed in 14.723250513s • [SLOW TEST:24.766 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:21:59.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-e7893657-3f64-11ea-8a8b-0242ac110006 STEP: Creating secret with name secret-projected-all-test-volume-e78935e0-3f64-11ea-8a8b-0242ac110006 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 25 11:22:00.315: INFO: Waiting up to 5m0s for pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-smb7q" to be "success or failure" Jan 25 11:22:00.384: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006": Phase="Pending", 
Reason="", readiness=false. Elapsed: 68.117077ms Jan 25 11:22:02.409: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093420573s Jan 25 11:22:04.433: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117881295s Jan 25 11:22:06.476: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161021509s Jan 25 11:22:08.504: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188554313s Jan 25 11:22:10.603: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.287235398s STEP: Saw pod success Jan 25 11:22:10.603: INFO: Pod "projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:22:10.618: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006 container projected-all-volume-test: STEP: delete the pod Jan 25 11:22:11.375: INFO: Waiting for pod projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006 to disappear Jan 25 11:22:11.772: INFO: Pod projected-volume-e789345e-3f64-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:22:11.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-smb7q" for this suite. Jan 25 11:22:18.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:22:18.292: INFO: namespace: e2e-tests-projected-smb7q, resource: bindings, ignored listing per whitelist Jan 25 11:22:18.292: INFO: namespace e2e-tests-projected-smb7q deletion completed in 6.503630581s • [SLOW TEST:18.426 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:22:18.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 25 11:22:18.581: INFO: Waiting up to 5m0s for pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-8h9d8" to be "success or failure" Jan 25 11:22:18.665: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", 
readiness=false. Elapsed: 83.296941ms Jan 25 11:22:20.738: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15608924s Jan 25 11:22:22.797: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215659051s Jan 25 11:22:25.313: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.731588645s Jan 25 11:22:27.612: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 9.030083856s Jan 25 11:22:29.631: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.049086537s STEP: Saw pod success Jan 25 11:22:29.631: INFO: Pod "pod-f2773f91-3f64-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:22:29.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f2773f91-3f64-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:22:29.768: INFO: Waiting for pod pod-f2773f91-3f64-11ea-8a8b-0242ac110006 to disappear Jan 25 11:22:29.808: INFO: Pod pod-f2773f91-3f64-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:22:29.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8h9d8" for this suite. Jan 25 11:22:35.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:22:35.942: INFO: namespace: e2e-tests-emptydir-8h9d8, resource: bindings, ignored listing per whitelist Jan 25 11:22:36.073: INFO: namespace e2e-tests-emptydir-8h9d8 deletion completed in 6.24783617s • [SLOW TEST:17.779 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:22:36.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-vfwvf/secret-test-fd3171a2-3f64-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume secrets Jan 25 11:22:36.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-vfwvf" to be "success or failure" Jan 25 11:22:36.668: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 46.024937ms Jan 25 11:22:38.709: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087876772s Jan 25 11:22:40.733: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111213496s Jan 25 11:22:42.980: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358766085s Jan 25 11:22:45.075: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 8.453018132s Jan 25 11:22:47.085: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463259069s STEP: Saw pod success Jan 25 11:22:47.085: INFO: Pod "pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:22:47.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006 container env-test: STEP: delete the pod Jan 25 11:22:47.654: INFO: Waiting for pod pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006 to disappear Jan 25 11:22:48.001: INFO: Pod pod-configmaps-fd3914ae-3f64-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:22:48.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vfwvf" for this suite. Jan 25 11:22:56.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:22:56.140: INFO: namespace: e2e-tests-secrets-vfwvf, resource: bindings, ignored listing per whitelist Jan 25 11:22:56.188: INFO: namespace e2e-tests-secrets-vfwvf deletion completed in 8.173350573s • [SLOW TEST:20.115 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:22:56.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 25 11:22:56.647: INFO: Waiting up to 5m0s for pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-jgqv2" to be "success or failure" Jan 25 11:22:56.698: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 51.01473ms Jan 25 11:22:59.152: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.505184772s Jan 25 11:23:01.168: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521136667s Jan 25 11:23:03.395: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.74831372s Jan 25 11:23:05.482: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.835344986s Jan 25 11:23:07.500: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.8527029s Jan 25 11:23:09.516: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.869210744s STEP: Saw pod success Jan 25 11:23:09.516: INFO: Pod "pod-0929ce49-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:23:09.522: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0929ce49-3f65-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:23:09.657: INFO: Waiting for pod pod-0929ce49-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:23:09.667: INFO: Pod pod-0929ce49-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:23:09.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jgqv2" for this suite. Jan 25 11:23:15.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:23:15.754: INFO: namespace: e2e-tests-emptydir-jgqv2, resource: bindings, ignored listing per whitelist Jan 25 11:23:15.951: INFO: namespace e2e-tests-emptydir-jgqv2 deletion completed in 6.278515863s • [SLOW TEST:19.763 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:23:15.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jan 25 11:23:16.193: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:23:16.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-kubectl-pk2cr" for this suite. Jan 25 11:23:22.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:23:22.503: INFO: namespace: e2e-tests-kubectl-pk2cr, resource: bindings, ignored listing per whitelist Jan 25 11:23:22.620: INFO: namespace e2e-tests-kubectl-pk2cr deletion completed in 6.275144552s • [SLOW TEST:6.669 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:23:22.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:23:22.853: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 25 11:23:22.867: INFO: Number of nodes with available pods: 0 Jan 25 11:23:22.867: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 25 11:23:22.959: INFO: Number of nodes with available pods: 0 Jan 25 11:23:22.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:23.981: INFO: Number of nodes with available pods: 0 Jan 25 11:23:23.981: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:25.754: INFO: Number of nodes with available pods: 0 Jan 25 11:23:25.755: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:26.479: INFO: Number of nodes with available pods: 0 Jan 25 11:23:26.479: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:27.001: INFO: Number of nodes with available pods: 0 Jan 25 11:23:27.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:27.980: INFO: Number of nodes with available pods: 0 Jan 25 11:23:27.980: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:29.057: INFO: Number of nodes with available pods: 0 Jan 25 11:23:29.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:29.997: INFO: Number of nodes with available pods: 0 Jan 25 11:23:29.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:30.977: INFO: Number of nodes with available pods: 0 Jan 25 11:23:30.977: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:32.496: INFO: Number of nodes with available pods: 0 Jan 25 11:23:32.497: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:33.068: INFO: Number of nodes with available pods: 0 Jan 25 11:23:33.068: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:33.975: INFO: Number of nodes with available pods: 0 Jan 25 11:23:33.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:34.993: INFO: Number of nodes with available pods: 0 Jan 25 11:23:34.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:35.970: INFO: Number of nodes with available pods: 0 Jan 25 11:23:35.970: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:36.978: INFO: Number of nodes with available pods: 0 Jan 25 11:23:36.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:37.982: INFO: Number of nodes with available pods: 1 Jan 25 11:23:37.982: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 25 11:23:38.054: INFO: Number of nodes with available pods: 1 Jan 25 11:23:38.054: INFO: Number of running nodes: 0, number of available pods: 1 Jan 25 11:23:39.085: INFO: Number of nodes with available pods: 0 Jan 25 11:23:39.085: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 25 11:23:39.193: INFO: Number of nodes with available pods: 0 Jan 25 11:23:39.194: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:40.226: INFO: Number of nodes with available pods: 0 Jan 25 11:23:40.226: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:41.212: INFO: Number of nodes with available pods: 0 Jan 25 11:23:41.212: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:42.239: INFO: Number of 
nodes with available pods: 0 Jan 25 11:23:42.240: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:43.348: INFO: Number of nodes with available pods: 0 Jan 25 11:23:43.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:44.207: INFO: Number of nodes with available pods: 0 Jan 25 11:23:44.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:45.207: INFO: Number of nodes with available pods: 0 Jan 25 11:23:45.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:46.209: INFO: Number of nodes with available pods: 0 Jan 25 11:23:46.209: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:47.227: INFO: Number of nodes with available pods: 0 Jan 25 11:23:47.228: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:48.217: INFO: Number of nodes with available pods: 0 Jan 25 11:23:48.217: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:49.229: INFO: Number of nodes with available pods: 0 Jan 25 11:23:49.229: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:50.212: INFO: Number of nodes with available pods: 0 Jan 25 11:23:50.212: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:51.237: INFO: Number of nodes with available pods: 0 Jan 25 11:23:51.238: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:52.220: INFO: Number of nodes with available pods: 0 Jan 25 11:23:52.220: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:53.206: INFO: Number of nodes with available pods: 0 Jan 25 11:23:53.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:54.867: INFO: Number of nodes with available pods: 0 Jan 25 11:23:54.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:55.211: INFO: Number of nodes with available pods: 0 Jan 25 11:23:55.211: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:56.660: INFO: Number of nodes with available pods: 0 Jan 25 11:23:56.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:57.227: INFO: Number of nodes with available pods: 0 Jan 25 11:23:57.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:58.207: INFO: Number of nodes with available pods: 0 Jan 25 11:23:58.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:23:59.211: INFO: Number of nodes with available pods: 0 Jan 25 11:23:59.211: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:24:00.323: INFO: Number of nodes with available pods: 0 Jan 25 11:24:00.323: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:24:01.364: INFO: Number of nodes with available pods: 0 Jan 25 11:24:01.364: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:24:02.224: INFO: Number of nodes with available pods: 0 Jan 25 11:24:02.224: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 25 11:24:03.219: INFO: Number of nodes with available pods: 1 Jan 25 11:24:03.219: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-qcrng, will wait for the garbage collector to delete the pods Jan 25 11:24:03.448: INFO: Deleting DaemonSet.extensions daemon-set took: 151.116091ms Jan 25 11:24:03.550: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.036389ms Jan 25 11:24:11.105: INFO: Number of nodes with available pods: 0 Jan 25 11:24:11.105: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 11:24:11.113: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qcrng/daemonsets","resourceVersion":"19404114"},"items":null} Jan 25 11:24:11.116: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qcrng/pods","resourceVersion":"19404114"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:24:11.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-qcrng" for this suite. Jan 25 11:24:17.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:24:17.279: INFO: namespace: e2e-tests-daemonsets-qcrng, resource: bindings, ignored listing per whitelist Jan 25 11:24:17.391: INFO: namespace e2e-tests-daemonsets-qcrng deletion completed in 6.217884197s • [SLOW TEST:54.770 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:24:17.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 25 11:24:17.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:18.005: INFO: stderr: "" Jan 25 11:24:18.006: INFO: stdout: "pod/pause created\n" Jan 25 11:24:18.006: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 25 11:24:18.006: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tfddx" to be "running and ready" Jan 25 11:24:18.027: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.284863ms Jan 25 11:24:20.039: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032894228s Jan 25 11:24:22.050: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044480805s Jan 25 11:24:24.692: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685692832s Jan 25 11:24:26.714: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708486249s Jan 25 11:24:28.725: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.719671544s Jan 25 11:24:28.726: INFO: Pod "pause" satisfied condition "running and ready" Jan 25 11:24:28.726: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 25 11:24:28.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:28.969: INFO: stderr: "" Jan 25 11:24:28.969: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 25 11:24:28.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:29.232: INFO: stderr: "" Jan 25 11:24:29.232: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 25 11:24:29.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:29.453: INFO: stderr: "" Jan 25 11:24:29.453: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 25 11:24:29.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:29.658: INFO: stderr: "" Jan 25 11:24:29.659: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 25 11:24:29.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:29.841: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 25 11:24:29.841: INFO: stdout: "pod \"pause\" force deleted\n" Jan 25 11:24:29.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tfddx' Jan 25 11:24:30.023: INFO: stderr: "No resources found.\n" Jan 25 11:24:30.023: INFO: stdout: "" Jan 25 11:24:30.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tfddx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 11:24:30.156: INFO: stderr: "" Jan 25 11:24:30.156: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:24:30.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tfddx" for this suite. Jan 25 11:24:38.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:24:38.152: INFO: namespace: e2e-tests-kubectl-tfddx, resource: bindings, ignored listing per whitelist Jan 25 11:24:38.252: INFO: namespace e2e-tests-kubectl-tfddx deletion completed in 7.275431728s • [SLOW TEST:20.861 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:24:38.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 25 11:24:38.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tfxq9' Jan 25 11:24:38.941: INFO: stderr: "" Jan 25 11:24:38.942: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jan 25 11:24:39.040: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-tfxq9' Jan 25 11:24:42.686: INFO: stderr: "" Jan 25 11:24:42.686: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:24:42.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tfxq9" for this suite. Jan 25 11:24:50.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:24:51.112: INFO: namespace: e2e-tests-kubectl-tfxq9, resource: bindings, ignored listing per whitelist Jan 25 11:24:51.143: INFO: namespace e2e-tests-kubectl-tfxq9 deletion completed in 8.446893484s • [SLOW TEST:12.890 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:24:51.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4d9f706d-3f65-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume secrets Jan 25 11:24:51.740: INFO: Waiting up to 5m0s for pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-zl67h" to be "success or failure" Jan 25 11:24:51.751: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.01813ms Jan 25 11:24:54.599: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.859396534s Jan 25 11:24:56.628: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.888045735s Jan 25 11:24:58.642: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90220052s Jan 25 11:25:00.673: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.932924819s Jan 25 11:25:02.704: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964174987s Jan 25 11:25:04.736: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.99600667s STEP: Saw pod success Jan 25 11:25:04.736: INFO: Pod "pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:25:04.744: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006 container secret-volume-test: STEP: delete the pod Jan 25 11:25:04.827: INFO: Waiting for pod pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:25:04.898: INFO: Pod pod-secrets-4da025cb-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:25:04.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zl67h" for this suite. Jan 25 11:25:10.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:25:11.001: INFO: namespace: e2e-tests-secrets-zl67h, resource: bindings, ignored listing per whitelist Jan 25 11:25:11.115: INFO: namespace e2e-tests-secrets-zl67h deletion completed in 6.204018817s • [SLOW TEST:19.972 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:25:11.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jan 25 11:25:11.456: INFO: Waiting up to 5m0s for pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-var-expansion-7qqjc" to be "success or failure" Jan 25 11:25:11.477: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 20.938703ms Jan 25 11:25:14.279: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.823352582s Jan 25 11:25:16.306: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.850228421s Jan 25 11:25:18.354: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.898288195s Jan 25 11:25:20.916: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.460297333s Jan 25 11:25:23.047: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.591473399s Jan 25 11:25:25.067: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.611069274s Jan 25 11:25:27.076: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.620326153s STEP: Saw pod success Jan 25 11:25:27.076: INFO: Pod "var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:25:27.080: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006 container dapi-container: STEP: delete the pod Jan 25 11:25:27.156: INFO: Waiting for pod var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:25:27.262: INFO: Pod var-expansion-598203ff-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:25:27.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7qqjc" for this suite. Jan 25 11:25:35.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:25:35.617: INFO: namespace: e2e-tests-var-expansion-7qqjc, resource: bindings, ignored listing per whitelist Jan 25 11:25:35.658: INFO: namespace e2e-tests-var-expansion-7qqjc deletion completed in 8.373643699s • [SLOW TEST:24.542 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:25:35.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:25:52.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8tpq8" for this suite. 
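
A few specs back, the Variable Expansion test checks that $(VAR) references in a container's command are substituted from the container's own environment before the process starts. The pod it creates is not reproduced in this log; below is a minimal sketch of a pod exercising the same mechanism in Go, assuming k8s.io/api and k8s.io/apimachinery — the variable name, image, and echoed value are illustrative, while the dapi-container name matches the container the log fetches output from.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The kubelet expands $(TEST_VAR) in command/args using the container's
    // environment, so the container simply echoes the substituted value and
    // exits, leaving the pod in the Succeeded phase seen in the log.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox", // illustrative image
                Command: []string{"echo", "value: $(TEST_VAR)"},
                Env: []corev1.EnvVar{
                    {Name: "TEST_VAR", Value: "test-value"},
                },
            }},
        },
    }

    out, err := json.MarshalIndent(&pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}

The spec then reads the container's log (the "Trying to get logs ... container dapi-container" line above) and checks that the expanded value appears there.
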
Jan 25 11:25:59.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:25:59.401: INFO: namespace: e2e-tests-kubelet-test-8tpq8, resource: bindings, ignored listing per whitelist Jan 25 11:25:59.417: INFO: namespace e2e-tests-kubelet-test-8tpq8 deletion completed in 6.537197445s • [SLOW TEST:23.759 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:25:59.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 11:25:59.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-w9695" to be "success or failure" Jan 25 11:26:00.030: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 87.248506ms Jan 25 11:26:02.044: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101842888s Jan 25 11:26:04.277: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33454267s Jan 25 11:26:06.762: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819216563s Jan 25 11:26:08.776: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.833990409s Jan 25 11:26:11.017: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.074293334s Jan 25 11:26:13.056: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.113654832s STEP: Saw pod success Jan 25 11:26:13.056: INFO: Pod "downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:26:13.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 11:26:13.449: INFO: Waiting for pod downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:26:13.464: INFO: Pod downwardapi-volume-765b4c70-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:26:13.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-w9695" for this suite. Jan 25 11:26:21.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:26:21.886: INFO: namespace: e2e-tests-projected-w9695, resource: bindings, ignored listing per whitelist Jan 25 11:26:21.896: INFO: namespace e2e-tests-projected-w9695 deletion completed in 8.407502355s • [SLOW TEST:22.478 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:26:21.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-83bdde76-3f65-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume secrets Jan 25 11:26:22.301: INFO: Waiting up to 5m0s for pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-5wwq4" to be "success or failure" Jan 25 11:26:22.546: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 244.106209ms Jan 25 11:26:25.615: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.312977355s Jan 25 11:26:27.629: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.327616016s Jan 25 11:26:29.644: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.341765305s Jan 25 11:26:32.987: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685100741s Jan 25 11:26:35.258: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.956658664s Jan 25 11:26:37.274: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.971813362s Jan 25 11:26:39.287: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.98479488s STEP: Saw pod success Jan 25 11:26:39.287: INFO: Pod "pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:26:39.296: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006 container secret-volume-test: STEP: delete the pod Jan 25 11:26:40.716: INFO: Waiting for pod pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:26:40.733: INFO: Pod pod-secrets-83be96c2-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:26:40.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5wwq4" for this suite. Jan 25 11:26:49.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:26:49.594: INFO: namespace: e2e-tests-secrets-5wwq4, resource: bindings, ignored listing per whitelist Jan 25 11:26:49.879: INFO: namespace e2e-tests-secrets-5wwq4 deletion completed in 8.560781772s • [SLOW TEST:27.983 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:26:49.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 25 11:26:50.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:26:50.651: INFO: stderr: "" Jan 25 11:26:50.651: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
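Back to the Secrets case that wrapped up just above: the pod mounts a Secret as a volume and the test container simply reads the projected key back as a file. A minimal sketch of the same pattern, assuming kubectl against a working cluster; the secret name, key, and mount path are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # print the key that the secret volume projected as a file
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs secret-volume-demo    # prints: value-1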
Jan 25 11:26:50.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:26:50.769: INFO: stderr: "" Jan 25 11:26:50.770: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jan 25 11:26:55.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:26:56.003: INFO: stderr: "" Jan 25 11:26:56.003: INFO: stdout: "update-demo-nautilus-5vwdx update-demo-nautilus-h6mxs " Jan 25 11:26:56.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:26:57.645: INFO: stderr: "" Jan 25 11:26:57.645: INFO: stdout: "" Jan 25 11:26:57.645: INFO: update-demo-nautilus-5vwdx is created but not running Jan 25 11:27:02.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:02.919: INFO: stderr: "" Jan 25 11:27:02.920: INFO: stdout: "update-demo-nautilus-5vwdx update-demo-nautilus-h6mxs " Jan 25 11:27:02.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:03.099: INFO: stderr: "" Jan 25 11:27:03.099: INFO: stdout: "" Jan 25 11:27:03.099: INFO: update-demo-nautilus-5vwdx is created but not running Jan 25 11:27:08.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:08.481: INFO: stderr: "" Jan 25 11:27:08.481: INFO: stdout: "update-demo-nautilus-5vwdx update-demo-nautilus-h6mxs " Jan 25 11:27:08.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:09.413: INFO: stderr: "" Jan 25 11:27:09.414: INFO: stdout: "" Jan 25 11:27:09.414: INFO: update-demo-nautilus-5vwdx is created but not running Jan 25 11:27:14.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:14.688: INFO: stderr: "" Jan 25 11:27:14.688: INFO: stdout: "update-demo-nautilus-5vwdx update-demo-nautilus-h6mxs " Jan 25 11:27:14.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:14.796: INFO: stderr: "" Jan 25 11:27:14.796: INFO: stdout: "true" Jan 25 11:27:14.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:14.944: INFO: stderr: "" Jan 25 11:27:14.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:27:14.945: INFO: validating pod update-demo-nautilus-5vwdx Jan 25 11:27:15.000: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:27:15.001: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:27:15.001: INFO: update-demo-nautilus-5vwdx is verified up and running Jan 25 11:27:15.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6mxs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:15.143: INFO: stderr: "" Jan 25 11:27:15.144: INFO: stdout: "" Jan 25 11:27:15.144: INFO: update-demo-nautilus-h6mxs is created but not running Jan 25 11:27:20.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:20.347: INFO: stderr: "" Jan 25 11:27:20.347: INFO: stdout: "update-demo-nautilus-5vwdx update-demo-nautilus-h6mxs " Jan 25 11:27:20.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:20.605: INFO: stderr: "" Jan 25 11:27:20.605: INFO: stdout: "true" Jan 25 11:27:20.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vwdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:20.724: INFO: stderr: "" Jan 25 11:27:20.724: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:27:20.724: INFO: validating pod update-demo-nautilus-5vwdx Jan 25 11:27:20.738: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:27:20.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:27:20.738: INFO: update-demo-nautilus-5vwdx is verified up and running Jan 25 11:27:20.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6mxs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:21.111: INFO: stderr: "" Jan 25 11:27:21.111: INFO: stdout: "true" Jan 25 11:27:21.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6mxs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:21.247: INFO: stderr: "" Jan 25 11:27:21.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:27:21.247: INFO: validating pod update-demo-nautilus-h6mxs Jan 25 11:27:21.258: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:27:21.258: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:27:21.258: INFO: update-demo-nautilus-h6mxs is verified up and running STEP: using delete to clean up resources Jan 25 11:27:21.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:21.391: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 11:27:21.391: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 25 11:27:21.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-bg6jl' Jan 25 11:27:21.638: INFO: stderr: "No resources found.\n" Jan 25 11:27:21.639: INFO: stdout: "" Jan 25 11:27:21.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-bg6jl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 11:27:22.624: INFO: stderr: "" Jan 25 11:27:22.624: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:27:22.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bg6jl" for this suite. 
Jan 25 11:27:49.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:27:49.673: INFO: namespace: e2e-tests-kubectl-bg6jl, resource: bindings, ignored listing per whitelist Jan 25 11:27:49.686: INFO: namespace e2e-tests-kubectl-bg6jl deletion completed in 27.044769292s • [SLOW TEST:59.807 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:27:49.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jan 25 11:27:49.882: INFO: Waiting up to 5m0s for pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-containers-6mn6s" to be "success or failure" Jan 25 11:27:49.974: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 91.441224ms Jan 25 11:27:51.992: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109878029s Jan 25 11:27:54.034: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151957782s Jan 25 11:27:56.685: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802407847s Jan 25 11:27:58.718: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.83570631s Jan 25 11:28:00.988: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.105281174s Jan 25 11:28:03.002: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.1198681s STEP: Saw pod success Jan 25 11:28:03.002: INFO: Pod "client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:28:03.008: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:28:04.856: INFO: Waiting for pod client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:28:04.873: INFO: Pod client-containers-b7edf58d-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:28:04.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-6mn6s" for this suite. Jan 25 11:28:13.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:28:13.305: INFO: namespace: e2e-tests-containers-6mn6s, resource: bindings, ignored listing per whitelist Jan 25 11:28:13.462: INFO: namespace e2e-tests-containers-6mn6s deletion completed in 8.580176439s • [SLOW TEST:23.776 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:28:13.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > 
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b66ml.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b66ml.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 11:28:36.368: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.378: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.386: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.391: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.395: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.400: INFO: Unable to read 
wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.403: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.421: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.426: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.430: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.433: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.539: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.604: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.636: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.649: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.666: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.678: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.685: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.691: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server 
could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.695: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006) Jan 25 11:28:36.695: INFO: Lookups using e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-b66ml.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 25 11:28:42.360: INFO: DNS probes using e2e-tests-dns-b66ml/dns-test-c636d32e-3f65-11ea-8a8b-0242ac110006 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:28:42.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-b66ml" for this suite. Jan 25 11:28:51.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:28:51.539: INFO: namespace: e2e-tests-dns-b66ml, resource: bindings, ignored listing per whitelist Jan 25 11:28:51.587: INFO: namespace e2e-tests-dns-b66ml deletion completed in 9.026142698s • [SLOW TEST:38.124 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:28:51.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 25 11:28:51.918: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 11:28:52.028: INFO: Waiting for terminating namespaces to be deleted... 
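Stepping back to the DNS case summarized above: the probe pod loops over dig and getent lookups for kubernetes.default (and its svc / svc.cluster.local forms) over both UDP and TCP, dropping an OK marker per name, and the harness polls for those markers; the "Unable to read ..." lines are that polling before the marker files exist, after which the probes are reported as succeeded. A quick manual spot-check of the same resolution path, with an illustrative pod name and a busybox tag commonly used for nslookup:

kubectl run dns-probe --image=busybox:1.28 --restart=Never -- \
  sh -c 'nslookup kubernetes.default.svc.cluster.local && nslookup kubernetes.default'
# once the pod completes, the resolved addresses are in its log
kubectl logs dns-probe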
Jan 25 11:28:52.035: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Jan 25 11:28:52.058: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 25 11:28:52.058: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 25 11:28:52.058: INFO: Container coredns ready: true, restart count 0 Jan 25 11:28:52.058: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 25 11:28:52.058: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 11:28:52.058: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 25 11:28:52.058: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 25 11:28:52.058: INFO: Container weave ready: true, restart count 0 Jan 25 11:28:52.058: INFO: Container weave-npc ready: true, restart count 0 Jan 25 11:28:52.058: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 25 11:28:52.058: INFO: Container coredns ready: true, restart count 0 Jan 25 11:28:52.058: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 25 11:28:52.058: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Jan 25 11:28:52.317: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 25 11:28:52.318: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
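The two steps above are the crux of this predicate test: filler pods first request most of the node's remaining allocatable CPU, then one more pod is created whose request cannot fit anywhere, so the scheduler has to reject it; that is the Insufficient cpu FailedScheduling event recorded just below. The mechanics are plain resource requests; a sketch with a deliberately absurd request (the image is the same pause image the filler pods use):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: wont-fit
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"       # far more CPU than any single node can allocate
EOF
# the pod stays Pending and a FailedScheduling event explains why
kubectl describe pod wont-fit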
STEP: Considering event: Type = [Normal], Name = [filler-pod-dd2d0464-3f65-11ea-8a8b-0242ac110006.15ed1dbaca867a11], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-j66dp/filler-pod-dd2d0464-3f65-11ea-8a8b-0242ac110006 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd2d0464-3f65-11ea-8a8b-0242ac110006.15ed1dbbd4314086], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd2d0464-3f65-11ea-8a8b-0242ac110006.15ed1dbc60dd7932], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd2d0464-3f65-11ea-8a8b-0242ac110006.15ed1dbc9e5ad370], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ed1dbd22aa931b], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:29:03.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-j66dp" for this suite. Jan 25 11:29:13.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:29:15.647: INFO: namespace: e2e-tests-sched-pred-j66dp, resource: bindings, ignored listing per whitelist Jan 25 11:29:15.717: INFO: namespace e2e-tests-sched-pred-j66dp deletion completed in 11.956683231s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:24.130 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:29:15.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 25 11:29:16.317: INFO: Waiting up to 5m0s for pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-4thn5" to be "success or failure" Jan 25 11:29:16.546: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 229.043923ms Jan 25 11:29:19.249: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.932178322s Jan 25 11:29:21.266: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.949089566s Jan 25 11:29:23.828: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.510234349s Jan 25 11:29:25.853: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.535885354s Jan 25 11:29:27.871: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.553640129s Jan 25 11:29:29.965: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.647813138s Jan 25 11:29:32.008: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.690944905s STEP: Saw pod success Jan 25 11:29:32.008: INFO: Pod "pod-eb70602a-3f65-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:29:32.014: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eb70602a-3f65-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:29:32.094: INFO: Waiting for pod pod-eb70602a-3f65-11ea-8a8b-0242ac110006 to disappear Jan 25 11:29:32.149: INFO: Pod pod-eb70602a-3f65-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:29:32.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4thn5" for this suite. 
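The EmptyDir case above roughly does this: mount an emptyDir volume on the default (node-disk) medium, create a 0644 file in it as root, and verify the mode and content from inside the container. A hand-rolled equivalent, with illustrative names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # write a 0644 file into the emptyDir mount and show its mode
    command: ["/bin/sh", "-c", "echo hi > /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}          # default medium: backed by node disk, not tmpfs
EOF
kubectl logs emptydir-mode-demo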
Jan 25 11:29:40.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:29:40.654: INFO: namespace: e2e-tests-emptydir-4thn5, resource: bindings, ignored listing per whitelist Jan 25 11:29:40.670: INFO: namespace e2e-tests-emptydir-4thn5 deletion completed in 8.500296515s • [SLOW TEST:24.952 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:29:40.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 25 11:29:41.123: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:30:02.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-9vm2s" for this suite. 
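The InitContainer case above asserts that, with restartPolicy Never, a failing init container is not retried, the app containers never start, and the pod as a whole ends up Failed. A hand-rolled version with illustrative names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: failing-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]        # exits non-zero, so initialization fails
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "echo this should never run"]
EOF
# phase goes to Failed and the app container is never started
kubectl get pod failing-init-demo -o jsonpath='{.status.phase}'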
Jan 25 11:30:11.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:30:11.083: INFO: namespace: e2e-tests-init-container-9vm2s, resource: bindings, ignored listing per whitelist Jan 25 11:30:11.234: INFO: namespace e2e-tests-init-container-9vm2s deletion completed in 8.313491333s • [SLOW TEST:30.563 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:30:11.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jan 25 11:30:11.732: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix205506494/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:30:11.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xjmvp" for this suite. 
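The Proxy server case above only verifies that kubectl proxy can listen on a Unix domain socket instead of a TCP port and still serve /api/. Reproducing it by hand takes a couple of commands; the socket path is arbitrary, and curl 7.40+ is assumed for --unix-socket:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
# any host name works here; the request actually travels over the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $!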
Jan 25 11:30:20.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:30:20.657: INFO: namespace: e2e-tests-kubectl-xjmvp, resource: bindings, ignored listing per whitelist Jan 25 11:30:20.672: INFO: namespace e2e-tests-kubectl-xjmvp deletion completed in 8.769730412s • [SLOW TEST:9.437 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:30:20.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-12148463-3f66-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:30:21.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-xc7q2" to be "success or failure" Jan 25 11:30:21.286: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 21.659121ms Jan 25 11:30:24.056: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.791387338s Jan 25 11:30:26.106: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.841742038s Jan 25 11:30:28.123: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.858989601s Jan 25 11:30:31.119: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.855010073s Jan 25 11:30:33.147: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.882459191s Jan 25 11:30:35.158: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.893781058s Jan 25 11:30:37.170: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.905662225s STEP: Saw pod success Jan 25 11:30:37.170: INFO: Pod "pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:30:37.173: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: STEP: delete the pod Jan 25 11:30:37.227: INFO: Waiting for pod pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006 to disappear Jan 25 11:30:37.233: INFO: Pod pod-projected-configmaps-121783fb-3f66-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:30:37.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xc7q2" for this suite. Jan 25 11:30:45.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:30:45.330: INFO: namespace: e2e-tests-projected-xc7q2, resource: bindings, ignored listing per whitelist Jan 25 11:30:45.409: INFO: namespace e2e-tests-projected-xc7q2 deletion completed in 8.169535104s • [SLOW TEST:24.737 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:30:45.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:30:45.907: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 25 11:30:45.972: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 25 11:30:51.027: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 25 11:31:01.053: INFO: Creating deployment "test-rolling-update-deployment" Jan 25 11:31:01.071: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 25 11:31:01.093: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 25 11:31:03.172: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 25 11:31:03.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 11:31:05.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 11:31:07.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 11:31:09.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548661, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 11:31:11.224: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 25 11:31:11.286: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-tdttj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tdttj/deployments/test-rolling-update-deployment,UID:29e93a35-3f66-11ea-a994-fa163e34d433,ResourceVersion:19405035,Generation:1,CreationTimestamp:2020-01-25 11:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-25 11:31:01 +0000 UTC 2020-01-25 11:31:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-25 11:31:11 +0000 UTC 2020-01-25 11:31:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 25 11:31:11.293: INFO: New ReplicaSet 
"test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-tdttj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tdttj/replicasets/test-rolling-update-deployment-75db98fb4c,UID:29f2b0e3-3f66-11ea-a994-fa163e34d433,ResourceVersion:19405026,Generation:1,CreationTimestamp:2020-01-25 11:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 29e93a35-3f66-11ea-a994-fa163e34d433 0xc001951827 0xc001951828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 25 11:31:11.293: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 25 11:31:11.293: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-tdttj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tdttj/replicasets/test-rolling-update-controller,UID:20e2118e-3f66-11ea-a994-fa163e34d433,ResourceVersion:19405034,Generation:2,CreationTimestamp:2020-01-25 11:30:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 29e93a35-3f66-11ea-a994-fa163e34d433 0xc001951767 0xc001951768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 25 11:31:11.301: INFO: Pod "test-rolling-update-deployment-75db98fb4c-mkm4m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-mkm4m,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-tdttj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tdttj/pods/test-rolling-update-deployment-75db98fb4c-mkm4m,UID:2a03da88-3f66-11ea-a994-fa163e34d433,ResourceVersion:19405025,Generation:0,CreationTimestamp:2020-01-25 11:31:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 29f2b0e3-3f66-11ea-a994-fa163e34d433 0xc00133fc67 0xc00133fc68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ndwj5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ndwj5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ndwj5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00133fdb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00133fe50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 11:31:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 11:31:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 11:31:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 11:31:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-25 11:31:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-25 11:31:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://66344e21a4646a5c499c14ca10738ea8d2648dba9a0f356646b7dace69bc2ede}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:31:11.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-tdttj" for this suite. 
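For reference, the Deployment dumped above corresponds roughly to a spec like the following minimal Go sketch using the same k8s.io/api types that appear in the dump. The 25% maxSurge/maxUnavailable, single replica, selector and redis image are taken from the log; the helper function and the print statement are only illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	// Single-replica Deployment with the default RollingUpdate strategy,
	// matching the spec printed in the log above.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d.Spec.Strategy)
}

The rolling update then replaces the adopted nginx ReplicaSet ("test-rolling-update-controller") with the new redis one, which is exactly what the ReplicaSet dumps above show.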
Jan 25 11:31:19.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:31:19.476: INFO: namespace: e2e-tests-deployment-tdttj, resource: bindings, ignored listing per whitelist Jan 25 11:31:19.589: INFO: namespace e2e-tests-deployment-tdttj deletion completed in 8.280888048s • [SLOW TEST:34.179 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:31:19.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 25 11:31:23.030: INFO: Waiting up to 5m0s for pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-5qp4q" to be "success or failure" Jan 25 11:31:23.064: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 33.540187ms Jan 25 11:31:25.092: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061045351s Jan 25 11:31:27.111: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079931168s Jan 25 11:31:29.137: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106183194s Jan 25 11:31:32.064: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.033419385s Jan 25 11:31:34.088: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.056619559s Jan 25 11:31:36.100: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.069367289s Jan 25 11:31:38.320: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.288773995s STEP: Saw pod success Jan 25 11:31:38.320: INFO: Pod "pod-3664f7b1-3f66-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:31:38.343: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3664f7b1-3f66-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:31:38.758: INFO: Waiting for pod pod-3664f7b1-3f66-11ea-8a8b-0242ac110006 to disappear Jan 25 11:31:38.796: INFO: Pod pod-3664f7b1-3f66-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:31:38.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5qp4q" for this suite. Jan 25 11:31:44.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:31:44.942: INFO: namespace: e2e-tests-emptydir-5qp4q, resource: bindings, ignored listing per whitelist Jan 25 11:31:45.043: INFO: namespace e2e-tests-emptydir-5qp4q deletion completed in 6.239999064s • [SLOW TEST:25.454 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:31:45.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-44440c09-3f66-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:31:45.300: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-59c8f" to be "success or failure" Jan 25 11:31:45.420: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 119.322253ms Jan 25 11:31:47.476: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175391183s Jan 25 11:31:49.489: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189059926s Jan 25 11:31:51.505: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204391864s Jan 25 11:31:53.516: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.215892011s Jan 25 11:31:55.533: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232657304s STEP: Saw pod success Jan 25 11:31:55.533: INFO: Pod "pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:31:55.541: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: STEP: delete the pod Jan 25 11:31:56.462: INFO: Waiting for pod pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006 to disappear Jan 25 11:31:56.774: INFO: Pod pod-projected-configmaps-4445d085-3f66-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:31:56.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-59c8f" for this suite. Jan 25 11:32:02.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:32:02.943: INFO: namespace: e2e-tests-projected-59c8f, resource: bindings, ignored listing per whitelist Jan 25 11:32:02.960: INFO: namespace e2e-tests-projected-59c8f deletion completed in 6.169880872s • [SLOW TEST:17.916 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:32:02.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-4f0c390c-3f66-11ea-8a8b-0242ac110006 STEP: Creating configMap with name cm-test-opt-upd-4f0c3a82-3f66-11ea-8a8b-0242ac110006 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4f0c390c-3f66-11ea-8a8b-0242ac110006 STEP: Updating configmap cm-test-opt-upd-4f0c3a82-3f66-11ea-8a8b-0242ac110006 STEP: Creating configMap with name cm-test-opt-create-4f0c3ad6-3f66-11ea-8a8b-0242ac110006 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:33:35.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fsxtn" for this suite. 
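The two projected-ConfigMap specs above boil down to the same shape: a pod that mounts a projected volume whose sources are ConfigMaps and that runs as a non-root UID. A minimal sketch follows; the names, UID, command and image are placeholders, and the Optional flag mirrors the "optional updates" case, where the kubelet refreshes the mounted view after the ConfigMaps are deleted/created/updated.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			// Non-root UID so the test can verify the projected files are readable without root.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
								// Optional sources may come and go while the pod runs; the kubelet
								// updates the mounted files, which the test polls for.
								Optional: boolPtr(true),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // the suite uses its own mounttest image; busybox is a stand-in
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/* && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}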
Jan 25 11:34:01.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:34:01.543: INFO: namespace: e2e-tests-projected-fsxtn, resource: bindings, ignored listing per whitelist Jan 25 11:34:01.603: INFO: namespace e2e-tests-projected-fsxtn deletion completed in 26.38714396s • [SLOW TEST:118.643 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:34:01.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:34:02.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-8jgn2" for this suite. 
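The "should provide secure master service" spec above has no STEP lines of its own because it only inspects the built-in kubernetes Service in the default namespace and asserts that it exposes an HTTPS port 443. A rough sketch of that expected shape; the 6443 target port is typical for kubeadm clusters and is an assumption, not something the log states.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Shape of the default API-server Service the conformance test inspects.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "kubernetes", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{{
				Name:       "https",
				Protocol:   corev1.ProtocolTCP,
				Port:       443,
				TargetPort: intstr.FromInt(6443), // assumption; varies by cluster setup
			}},
		},
	}
	// The test passes as long as a secure port like this is present.
	fmt.Println(svc.Spec.Ports[0].Name, svc.Spec.Ports[0].Port)
}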
Jan 25 11:34:08.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:34:08.339: INFO: namespace: e2e-tests-services-8jgn2, resource: bindings, ignored listing per whitelist Jan 25 11:34:08.360: INFO: namespace e2e-tests-services-8jgn2 deletion completed in 6.320319662s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.757 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:34:08.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 11:34:08.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-5xvln" to be "success or failure" Jan 25 11:34:09.166: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 265.078507ms Jan 25 11:34:12.984: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083671459s Jan 25 11:34:15.008: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107634996s Jan 25 11:34:18.360: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.459267879s Jan 25 11:34:20.384: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.483648072s Jan 25 11:34:22.439: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.538488235s Jan 25 11:34:24.470: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.569569411s Jan 25 11:34:27.263: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.362574654s STEP: Saw pod success Jan 25 11:34:27.264: INFO: Pod "downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:34:27.491: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 11:34:27.732: INFO: Waiting for pod downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006 to disappear Jan 25 11:34:27.755: INFO: Pod downwardapi-volume-99dada70-3f66-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:34:27.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5xvln" for this suite. Jan 25 11:34:33.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:34:34.353: INFO: namespace: e2e-tests-downward-api-5xvln, resource: bindings, ignored listing per whitelist Jan 25 11:34:34.603: INFO: namespace e2e-tests-downward-api-5xvln deletion completed in 6.838356146s • [SLOW TEST:26.242 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:34:34.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 25 11:34:49.440: INFO: Successfully updated pod "annotationupdatea946d985-3f66-11ea-8a8b-0242ac110006" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:34:51.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dvjmq" for this suite. 
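Both downward-API specs above mount a downwardAPI volume: the first exposes the container's memory limit through a resourceFieldRef, the second exposes pod metadata (annotations here, labels in a later spec) through a fieldRef and waits for an in-place update to show up in the mounted file. A minimal sketch; the file paths, limit value, image and container command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "downwardapi-volume-demo",
			Annotations: map[string]string{"builder": "e2e"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "cat /etc/podinfo/* && sleep 3600"},
				// limits.memory must be set for the resourceFieldRef below to resolve.
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{
							{Path: "memory_limit", ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", Resource: "limits.memory"}},
							{Path: "annotations", FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations"}},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].DownwardAPI.Items)
}

When the test patches the pod's annotations (or labels), the kubelet rewrites the corresponding file in /etc/podinfo, which is the update the spec waits to observe.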
Jan 25 11:35:17.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:35:17.695: INFO: namespace: e2e-tests-downward-api-dvjmq, resource: bindings, ignored listing per whitelist Jan 25 11:35:17.833: INFO: namespace e2e-tests-downward-api-dvjmq deletion completed in 26.256227978s • [SLOW TEST:43.229 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:35:17.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:35:18.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dnfzm" for this suite. 
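The "Pods Set QOS Class" spec above only creates a pod and checks status.qosClass, which Kubernetes derives from the containers' resource requests and limits. A minimal sketch of a pod that would be classified as Guaranteed (requests equal to limits on every container); drop the resources entirely and the same pod would be BestEffort. Names, image and quantities are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos-class-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // placeholder image
				// Requests == Limits on every container => status.qosClass "Guaranteed".
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	// status.qosClass is filled in by Kubernetes; shown here only as the expected constant.
	fmt.Println(pod.Name, "expected QoS:", corev1.PodQOSGuaranteed)
}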
Jan 25 11:35:43.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:35:43.843: INFO: namespace: e2e-tests-pods-dnfzm, resource: bindings, ignored listing per whitelist Jan 25 11:35:43.858: INFO: namespace e2e-tests-pods-dnfzm deletion completed in 24.91900775s • [SLOW TEST:26.025 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:35:43.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jan 25 11:35:44.232: INFO: Waiting up to 5m0s for pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-wm5fw" to be "success or failure" Jan 25 11:35:44.248: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.51462ms Jan 25 11:35:46.272: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039761034s Jan 25 11:35:48.288: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055922502s Jan 25 11:35:50.675: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442728931s Jan 25 11:35:52.688: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455788793s Jan 25 11:35:54.717: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.485239747s STEP: Saw pod success Jan 25 11:35:54.718: INFO: Pod "pod-d2a82764-3f66-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:35:54.733: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d2a82764-3f66-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:35:54.995: INFO: Waiting for pod pod-d2a82764-3f66-11ea-8a8b-0242ac110006 to disappear Jan 25 11:35:55.007: INFO: Pod pod-d2a82764-3f66-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:35:55.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wm5fw" for this suite. 
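The emptyDir specs in this run ((root,0666,default), the default-medium mode check above, and (non-root,0644,default) later) all follow the same pattern: a pod mounts an emptyDir volume, a test container writes or inspects a file with the requested permissions, and the test waits for phase Succeeded and checks the container log. A minimal sketch of such a pod; the real suite uses its mounttest image with dedicated flags, so the busybox command here is only an illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the pod must reach Succeeded, not restart
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium = node disk; corev1.StorageMediumMemory would use tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.EmptyDir)
}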
Jan 25 11:36:01.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:36:01.139: INFO: namespace: e2e-tests-emptydir-wm5fw, resource: bindings, ignored listing per whitelist Jan 25 11:36:01.233: INFO: namespace e2e-tests-emptydir-wm5fw deletion completed in 6.213812025s • [SLOW TEST:17.373 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:36:01.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:36:01.380: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:36:11.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8zxth" for this suite. 
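Retrieving logs over websockets in the spec above goes through the same pod log subresource that kubectl logs uses; the client simply upgrades the GET request to a websocket and streams the output. A sketch of the request path only, with placeholder pod and container names.

package main

import "fmt"

func main() {
	ns, pod, container := "e2e-tests-pods-8zxth", "pod-logs-websocket", "main"
	// Standard pod log subresource; the e2e client opens it with a websocket
	// upgrade instead of a plain HTTP GET in order to stream the container output.
	fmt.Printf("GET /api/v1/namespaces/%s/pods/%s/log?container=%s&follow=true\n", ns, pod, container)
}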
Jan 25 11:37:05.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:37:05.729: INFO: namespace: e2e-tests-pods-8zxth, resource: bindings, ignored listing per whitelist Jan 25 11:37:05.807: INFO: namespace e2e-tests-pods-8zxth deletion completed in 54.200183929s • [SLOW TEST:64.575 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:37:05.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-03a13338-3f67-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:37:06.513: INFO: Waiting up to 5m0s for pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-5zxfk" to be "success or failure" Jan 25 11:37:06.527: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.92317ms Jan 25 11:37:09.354: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841201523s Jan 25 11:37:11.371: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857860587s Jan 25 11:37:13.396: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.883065904s Jan 25 11:37:16.176: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.66278031s Jan 25 11:37:18.562: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0489412s Jan 25 11:37:20.836: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.322848415s Jan 25 11:37:22.959: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.446488216s STEP: Saw pod success Jan 25 11:37:22.960: INFO: Pod "pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:37:23.029: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006 container configmap-volume-test: STEP: delete the pod Jan 25 11:37:23.832: INFO: Waiting for pod pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006 to disappear Jan 25 11:37:23.917: INFO: Pod pod-configmaps-03a3ffc6-3f67-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:37:23.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5zxfk" for this suite. Jan 25 11:37:30.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:37:30.296: INFO: namespace: e2e-tests-configmap-5zxfk, resource: bindings, ignored listing per whitelist Jan 25 11:37:30.409: INFO: namespace e2e-tests-configmap-5zxfk deletion completed in 6.281779792s • [SLOW TEST:24.601 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:37:30.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 25 11:37:30.669: INFO: Waiting up to 5m0s for pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-78qkg" to be "success or failure" Jan 25 11:37:30.677: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.722808ms Jan 25 11:37:32.787: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117859822s Jan 25 11:37:34.823: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153969065s Jan 25 11:37:37.006: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336773914s Jan 25 11:37:39.025: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.355495952s Jan 25 11:37:41.035: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.366263577s STEP: Saw pod success Jan 25 11:37:41.036: INFO: Pod "pod-12209f1e-3f67-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:37:41.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-12209f1e-3f67-11ea-8a8b-0242ac110006 container test-container: STEP: delete the pod Jan 25 11:37:41.667: INFO: Waiting for pod pod-12209f1e-3f67-11ea-8a8b-0242ac110006 to disappear Jan 25 11:37:41.729: INFO: Pod pod-12209f1e-3f67-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:37:41.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-78qkg" for this suite. Jan 25 11:37:48.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:37:48.235: INFO: namespace: e2e-tests-emptydir-78qkg, resource: bindings, ignored listing per whitelist Jan 25 11:37:48.261: INFO: namespace e2e-tests-emptydir-78qkg deletion completed in 6.518312976s • [SLOW TEST:17.853 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:37:48.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-4scp STEP: Creating a pod to test atomic-volume-subpath Jan 25 11:37:48.625: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4scp" in namespace "e2e-tests-subpath-cpwpn" to be "success or failure" Jan 25 11:37:48.640: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 13.98712ms Jan 25 11:37:50.828: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202621307s Jan 25 11:37:52.855: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229437848s Jan 25 11:37:55.896: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 7.270115231s Jan 25 11:37:58.156: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 9.530600534s Jan 25 11:38:00.171: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.545749408s Jan 25 11:38:02.242: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 13.616250356s Jan 25 11:38:04.283: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.657249782s Jan 25 11:38:06.296: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Pending", Reason="", readiness=false. Elapsed: 17.670770548s Jan 25 11:38:08.315: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 19.689537807s Jan 25 11:38:10.334: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 21.708019666s Jan 25 11:38:12.353: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 23.727861014s Jan 25 11:38:14.388: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 25.762297829s Jan 25 11:38:16.422: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 27.79598834s Jan 25 11:38:18.448: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 29.82226008s Jan 25 11:38:20.497: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 31.871662803s Jan 25 11:38:22.560: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 33.933998494s Jan 25 11:38:24.617: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 35.99129364s Jan 25 11:38:26.636: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Running", Reason="", readiness=false. Elapsed: 38.010304263s Jan 25 11:38:29.455: INFO: Pod "pod-subpath-test-configmap-4scp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.829807623s STEP: Saw pod success Jan 25 11:38:29.456: INFO: Pod "pod-subpath-test-configmap-4scp" satisfied condition "success or failure" Jan 25 11:38:29.489: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-4scp container test-container-subpath-configmap-4scp: STEP: delete the pod Jan 25 11:38:30.473: INFO: Waiting for pod pod-subpath-test-configmap-4scp to disappear Jan 25 11:38:30.507: INFO: Pod pod-subpath-test-configmap-4scp no longer exists STEP: Deleting pod pod-subpath-test-configmap-4scp Jan 25 11:38:30.508: INFO: Deleting pod "pod-subpath-test-configmap-4scp" in namespace "e2e-tests-subpath-cpwpn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:38:30.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-cpwpn" for this suite. 
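The subpath spec above projects a single ConfigMap key over a path that already exists as a file inside the container image (the "mountPath of existing file" case), while the atomic writer rewrites the volume contents underneath it. A minimal sketch; the ConfigMap name, key and target file are placeholders, and the real suite's mounttest container keeps re-reading the file for the duration of the run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // placeholder
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hostname && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/etc/hostname", // existing file in the image (illustrative)
					SubPath:   "configmap-key", // single key mounted over that file
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].VolumeMounts[0])
}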
Jan 25 11:38:38.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:38:38.818: INFO: namespace: e2e-tests-subpath-cpwpn, resource: bindings, ignored listing per whitelist Jan 25 11:38:38.897: INFO: namespace e2e-tests-subpath-cpwpn deletion completed in 8.197878104s • [SLOW TEST:50.635 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:38:38.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 25 11:38:56.083: INFO: Successfully updated pod "labelsupdate3b1a61f4-3f67-11ea-8a8b-0242ac110006" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:38:58.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-t4c7j" for this suite. 
Jan 25 11:39:38.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:39:38.517: INFO: namespace: e2e-tests-downward-api-t4c7j, resource: bindings, ignored listing per whitelist Jan 25 11:39:38.634: INFO: namespace e2e-tests-downward-api-t4c7j deletion completed in 40.346057443s • [SLOW TEST:59.736 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:39:38.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 25 11:40:07.220: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:07.220: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:07.313309 8 log.go:172] (0xc001120000) (0xc0010b9680) Create stream I0125 11:40:07.313493 8 log.go:172] (0xc001120000) (0xc0010b9680) Stream added, broadcasting: 1 I0125 11:40:07.322127 8 log.go:172] (0xc001120000) Reply frame received for 1 I0125 11:40:07.322220 8 log.go:172] (0xc001120000) (0xc0010b97c0) Create stream I0125 11:40:07.322239 8 log.go:172] (0xc001120000) (0xc0010b97c0) Stream added, broadcasting: 3 I0125 11:40:07.324134 8 log.go:172] (0xc001120000) Reply frame received for 3 I0125 11:40:07.324252 8 log.go:172] (0xc001120000) (0xc0017e1180) Create stream I0125 11:40:07.324284 8 log.go:172] (0xc001120000) (0xc0017e1180) Stream added, broadcasting: 5 I0125 11:40:07.325945 8 log.go:172] (0xc001120000) Reply frame received for 5 I0125 11:40:07.573748 8 log.go:172] (0xc001120000) Data frame received for 3 I0125 11:40:07.573860 8 log.go:172] (0xc0010b97c0) (3) Data frame handling I0125 11:40:07.573896 8 log.go:172] (0xc0010b97c0) (3) Data frame sent I0125 11:40:07.714847 8 log.go:172] (0xc001120000) Data frame received for 1 I0125 11:40:07.714924 8 log.go:172] (0xc0010b9680) (1) Data frame handling I0125 11:40:07.714944 8 log.go:172] (0xc0010b9680) (1) Data frame sent I0125 11:40:07.714967 8 log.go:172] (0xc001120000) (0xc0010b9680) Stream removed, broadcasting: 1 I0125 11:40:07.715225 8 log.go:172] (0xc001120000) (0xc0010b97c0) Stream removed, broadcasting: 3 I0125 11:40:07.715669 8 log.go:172] (0xc001120000) (0xc0017e1180) Stream removed, broadcasting: 5 I0125 11:40:07.715772 
8 log.go:172] (0xc001120000) (0xc0010b9680) Stream removed, broadcasting: 1 I0125 11:40:07.715792 8 log.go:172] (0xc001120000) (0xc0010b97c0) Stream removed, broadcasting: 3 I0125 11:40:07.715801 8 log.go:172] (0xc001120000) (0xc0017e1180) Stream removed, broadcasting: 5 I0125 11:40:07.716209 8 log.go:172] (0xc001120000) Go away received Jan 25 11:40:07.716: INFO: Exec stderr: "" Jan 25 11:40:07.716: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:07.716: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:08.016585 8 log.go:172] (0xc0018f02c0) (0xc001ca70e0) Create stream I0125 11:40:08.016716 8 log.go:172] (0xc0018f02c0) (0xc001ca70e0) Stream added, broadcasting: 1 I0125 11:40:08.021793 8 log.go:172] (0xc0018f02c0) Reply frame received for 1 I0125 11:40:08.021820 8 log.go:172] (0xc0018f02c0) (0xc001d6d0e0) Create stream I0125 11:40:08.021828 8 log.go:172] (0xc0018f02c0) (0xc001d6d0e0) Stream added, broadcasting: 3 I0125 11:40:08.023654 8 log.go:172] (0xc0018f02c0) Reply frame received for 3 I0125 11:40:08.023701 8 log.go:172] (0xc0018f02c0) (0xc0017e1220) Create stream I0125 11:40:08.023716 8 log.go:172] (0xc0018f02c0) (0xc0017e1220) Stream added, broadcasting: 5 I0125 11:40:08.027110 8 log.go:172] (0xc0018f02c0) Reply frame received for 5 I0125 11:40:08.176820 8 log.go:172] (0xc0018f02c0) Data frame received for 3 I0125 11:40:08.177027 8 log.go:172] (0xc001d6d0e0) (3) Data frame handling I0125 11:40:08.177074 8 log.go:172] (0xc001d6d0e0) (3) Data frame sent I0125 11:40:08.326806 8 log.go:172] (0xc0018f02c0) Data frame received for 1 I0125 11:40:08.327070 8 log.go:172] (0xc0018f02c0) (0xc001d6d0e0) Stream removed, broadcasting: 3 I0125 11:40:08.327189 8 log.go:172] (0xc001ca70e0) (1) Data frame handling I0125 11:40:08.327237 8 log.go:172] (0xc001ca70e0) (1) Data frame sent I0125 11:40:08.327258 8 log.go:172] (0xc0018f02c0) (0xc0017e1220) Stream removed, broadcasting: 5 I0125 11:40:08.327346 8 log.go:172] (0xc0018f02c0) (0xc001ca70e0) Stream removed, broadcasting: 1 I0125 11:40:08.327390 8 log.go:172] (0xc0018f02c0) Go away received I0125 11:40:08.327722 8 log.go:172] (0xc0018f02c0) (0xc001ca70e0) Stream removed, broadcasting: 1 I0125 11:40:08.327745 8 log.go:172] (0xc0018f02c0) (0xc001d6d0e0) Stream removed, broadcasting: 3 I0125 11:40:08.327758 8 log.go:172] (0xc0018f02c0) (0xc0017e1220) Stream removed, broadcasting: 5 Jan 25 11:40:08.327: INFO: Exec stderr: "" Jan 25 11:40:08.328: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:08.328: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:08.408944 8 log.go:172] (0xc001b184d0) (0xc001d6d360) Create stream I0125 11:40:08.409341 8 log.go:172] (0xc001b184d0) (0xc001d6d360) Stream added, broadcasting: 1 I0125 11:40:08.421951 8 log.go:172] (0xc001b184d0) Reply frame received for 1 I0125 11:40:08.422140 8 log.go:172] (0xc001b184d0) (0xc0010b9a40) Create stream I0125 11:40:08.422167 8 log.go:172] (0xc001b184d0) (0xc0010b9a40) Stream added, broadcasting: 3 I0125 11:40:08.424550 8 log.go:172] (0xc001b184d0) Reply frame received for 3 I0125 11:40:08.424580 8 log.go:172] (0xc001b184d0) (0xc0010b9b80) Create stream I0125 11:40:08.424594 8 log.go:172] (0xc001b184d0) (0xc0010b9b80) Stream added, 
broadcasting: 5 I0125 11:40:08.425697 8 log.go:172] (0xc001b184d0) Reply frame received for 5 I0125 11:40:08.856768 8 log.go:172] (0xc001b184d0) Data frame received for 3 I0125 11:40:08.856906 8 log.go:172] (0xc0010b9a40) (3) Data frame handling I0125 11:40:08.856943 8 log.go:172] (0xc0010b9a40) (3) Data frame sent I0125 11:40:09.041380 8 log.go:172] (0xc001b184d0) (0xc0010b9a40) Stream removed, broadcasting: 3 I0125 11:40:09.041451 8 log.go:172] (0xc001b184d0) Data frame received for 1 I0125 11:40:09.041473 8 log.go:172] (0xc001d6d360) (1) Data frame handling I0125 11:40:09.041494 8 log.go:172] (0xc001d6d360) (1) Data frame sent I0125 11:40:09.041507 8 log.go:172] (0xc001b184d0) (0xc001d6d360) Stream removed, broadcasting: 1 I0125 11:40:09.041544 8 log.go:172] (0xc001b184d0) (0xc0010b9b80) Stream removed, broadcasting: 5 I0125 11:40:09.041617 8 log.go:172] (0xc001b184d0) Go away received I0125 11:40:09.041635 8 log.go:172] (0xc001b184d0) (0xc001d6d360) Stream removed, broadcasting: 1 I0125 11:40:09.041642 8 log.go:172] (0xc001b184d0) (0xc0010b9a40) Stream removed, broadcasting: 3 I0125 11:40:09.041654 8 log.go:172] (0xc001b184d0) (0xc0010b9b80) Stream removed, broadcasting: 5 Jan 25 11:40:09.041: INFO: Exec stderr: "" Jan 25 11:40:09.041: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:09.041: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:09.168034 8 log.go:172] (0xc000937760) (0xc0017e14a0) Create stream I0125 11:40:09.168203 8 log.go:172] (0xc000937760) (0xc0017e14a0) Stream added, broadcasting: 1 I0125 11:40:09.204777 8 log.go:172] (0xc000937760) Reply frame received for 1 I0125 11:40:09.204980 8 log.go:172] (0xc000937760) (0xc0017e1540) Create stream I0125 11:40:09.205013 8 log.go:172] (0xc000937760) (0xc0017e1540) Stream added, broadcasting: 3 I0125 11:40:09.206576 8 log.go:172] (0xc000937760) Reply frame received for 3 I0125 11:40:09.206602 8 log.go:172] (0xc000937760) (0xc0010b9cc0) Create stream I0125 11:40:09.206615 8 log.go:172] (0xc000937760) (0xc0010b9cc0) Stream added, broadcasting: 5 I0125 11:40:09.207505 8 log.go:172] (0xc000937760) Reply frame received for 5 I0125 11:40:09.298960 8 log.go:172] (0xc000937760) Data frame received for 3 I0125 11:40:09.299077 8 log.go:172] (0xc0017e1540) (3) Data frame handling I0125 11:40:09.299100 8 log.go:172] (0xc0017e1540) (3) Data frame sent I0125 11:40:09.399563 8 log.go:172] (0xc000937760) (0xc0017e1540) Stream removed, broadcasting: 3 I0125 11:40:09.399732 8 log.go:172] (0xc000937760) Data frame received for 1 I0125 11:40:09.399769 8 log.go:172] (0xc0017e14a0) (1) Data frame handling I0125 11:40:09.399800 8 log.go:172] (0xc0017e14a0) (1) Data frame sent I0125 11:40:09.399816 8 log.go:172] (0xc000937760) (0xc0017e14a0) Stream removed, broadcasting: 1 I0125 11:40:09.399906 8 log.go:172] (0xc000937760) (0xc0010b9cc0) Stream removed, broadcasting: 5 I0125 11:40:09.400025 8 log.go:172] (0xc000937760) (0xc0017e14a0) Stream removed, broadcasting: 1 I0125 11:40:09.400048 8 log.go:172] (0xc000937760) (0xc0017e1540) Stream removed, broadcasting: 3 I0125 11:40:09.400060 8 log.go:172] (0xc000937760) (0xc0010b9cc0) Stream removed, broadcasting: 5 I0125 11:40:09.400154 8 log.go:172] (0xc000937760) Go away received Jan 25 11:40:09.400: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 
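The STEP lines above exercise the kubelet's /etc/hosts management: containers normally get a kubelet-generated /etc/hosts, while hostNetwork pods and containers that mount their own /etc/hosts are left alone. A rough sketch of the non-hostNetwork test pod; the real suite runs three busybox containers plus a second pod with HostNetwork set to true, and the names, image and sleep command here are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: false, // the companion pod in this spec sets HostNetwork: true
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{
					// No /etc/hosts mount: the kubelet manages this container's /etc/hosts.
					Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"},
				},
				{
					// Explicit /etc/hosts mount: the kubelet leaves it alone, which is what the
					// "not kubelet-managed" verification above checks via `cat /etc/hosts`.
					Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[1].VolumeMounts)
}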
Jan 25 11:40:09.400: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:09.400: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:09.475517 8 log.go:172] (0xc000937c30) (0xc0017e17c0) Create stream I0125 11:40:09.475680 8 log.go:172] (0xc000937c30) (0xc0017e17c0) Stream added, broadcasting: 1 I0125 11:40:09.481007 8 log.go:172] (0xc000937c30) Reply frame received for 1 I0125 11:40:09.481070 8 log.go:172] (0xc000937c30) (0xc001ca7180) Create stream I0125 11:40:09.481090 8 log.go:172] (0xc000937c30) (0xc001ca7180) Stream added, broadcasting: 3 I0125 11:40:09.482333 8 log.go:172] (0xc000937c30) Reply frame received for 3 I0125 11:40:09.482359 8 log.go:172] (0xc000937c30) (0xc001ca72c0) Create stream I0125 11:40:09.482372 8 log.go:172] (0xc000937c30) (0xc001ca72c0) Stream added, broadcasting: 5 I0125 11:40:09.483248 8 log.go:172] (0xc000937c30) Reply frame received for 5 I0125 11:40:09.587756 8 log.go:172] (0xc000937c30) Data frame received for 3 I0125 11:40:09.587805 8 log.go:172] (0xc001ca7180) (3) Data frame handling I0125 11:40:09.587830 8 log.go:172] (0xc001ca7180) (3) Data frame sent I0125 11:40:09.706355 8 log.go:172] (0xc000937c30) Data frame received for 1 I0125 11:40:09.706452 8 log.go:172] (0xc000937c30) (0xc001ca7180) Stream removed, broadcasting: 3 I0125 11:40:09.706506 8 log.go:172] (0xc0017e17c0) (1) Data frame handling I0125 11:40:09.706529 8 log.go:172] (0xc0017e17c0) (1) Data frame sent I0125 11:40:09.706585 8 log.go:172] (0xc000937c30) (0xc001ca72c0) Stream removed, broadcasting: 5 I0125 11:40:09.706629 8 log.go:172] (0xc000937c30) (0xc0017e17c0) Stream removed, broadcasting: 1 I0125 11:40:09.706648 8 log.go:172] (0xc000937c30) Go away received I0125 11:40:09.707022 8 log.go:172] (0xc000937c30) (0xc0017e17c0) Stream removed, broadcasting: 1 I0125 11:40:09.707039 8 log.go:172] (0xc000937c30) (0xc001ca7180) Stream removed, broadcasting: 3 I0125 11:40:09.707095 8 log.go:172] (0xc000937c30) (0xc001ca72c0) Stream removed, broadcasting: 5 Jan 25 11:40:09.707: INFO: Exec stderr: "" Jan 25 11:40:09.707: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:09.707: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:09.783128 8 log.go:172] (0xc0018f0630) (0xc001ca74a0) Create stream I0125 11:40:09.783259 8 log.go:172] (0xc0018f0630) (0xc001ca74a0) Stream added, broadcasting: 1 I0125 11:40:09.788784 8 log.go:172] (0xc0018f0630) Reply frame received for 1 I0125 11:40:09.788833 8 log.go:172] (0xc0018f0630) (0xc001aafd60) Create stream I0125 11:40:09.788852 8 log.go:172] (0xc0018f0630) (0xc001aafd60) Stream added, broadcasting: 3 I0125 11:40:09.790999 8 log.go:172] (0xc0018f0630) Reply frame received for 3 I0125 11:40:09.791037 8 log.go:172] (0xc0018f0630) (0xc001ca7540) Create stream I0125 11:40:09.791054 8 log.go:172] (0xc0018f0630) (0xc001ca7540) Stream added, broadcasting: 5 I0125 11:40:09.793643 8 log.go:172] (0xc0018f0630) Reply frame received for 5 I0125 11:40:09.933180 8 log.go:172] (0xc0018f0630) Data frame received for 3 I0125 11:40:09.933403 8 log.go:172] (0xc001aafd60) (3) Data frame handling I0125 11:40:09.933449 8 log.go:172] (0xc001aafd60) (3) Data frame sent I0125 11:40:10.051625 8 log.go:172] (0xc0018f0630) Data frame received 
for 1 I0125 11:40:10.051892 8 log.go:172] (0xc001ca74a0) (1) Data frame handling I0125 11:40:10.051945 8 log.go:172] (0xc001ca74a0) (1) Data frame sent I0125 11:40:10.052098 8 log.go:172] (0xc0018f0630) (0xc001ca74a0) Stream removed, broadcasting: 1 I0125 11:40:10.052453 8 log.go:172] (0xc0018f0630) (0xc001ca7540) Stream removed, broadcasting: 5 I0125 11:40:10.052553 8 log.go:172] (0xc0018f0630) (0xc001aafd60) Stream removed, broadcasting: 3 I0125 11:40:10.052599 8 log.go:172] (0xc0018f0630) (0xc001ca74a0) Stream removed, broadcasting: 1 I0125 11:40:10.052622 8 log.go:172] (0xc0018f0630) (0xc001aafd60) Stream removed, broadcasting: 3 I0125 11:40:10.052639 8 log.go:172] (0xc0018f0630) (0xc001ca7540) Stream removed, broadcasting: 5 Jan 25 11:40:10.052: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 25 11:40:10.053: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:10.053: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:10.054057 8 log.go:172] (0xc0018f0630) Go away received I0125 11:40:10.120762 8 log.go:172] (0xc0018f0b00) (0xc001ca7720) Create stream I0125 11:40:10.121118 8 log.go:172] (0xc0018f0b00) (0xc001ca7720) Stream added, broadcasting: 1 I0125 11:40:10.231538 8 log.go:172] (0xc0018f0b00) Reply frame received for 1 I0125 11:40:10.231635 8 log.go:172] (0xc0018f0b00) (0xc001aafe00) Create stream I0125 11:40:10.231651 8 log.go:172] (0xc0018f0b00) (0xc001aafe00) Stream added, broadcasting: 3 I0125 11:40:10.233994 8 log.go:172] (0xc0018f0b00) Reply frame received for 3 I0125 11:40:10.234150 8 log.go:172] (0xc0018f0b00) (0xc0010b9f40) Create stream I0125 11:40:10.234175 8 log.go:172] (0xc0018f0b00) (0xc0010b9f40) Stream added, broadcasting: 5 I0125 11:40:10.235641 8 log.go:172] (0xc0018f0b00) Reply frame received for 5 I0125 11:40:10.322667 8 log.go:172] (0xc0018f0b00) Data frame received for 3 I0125 11:40:10.322786 8 log.go:172] (0xc001aafe00) (3) Data frame handling I0125 11:40:10.322826 8 log.go:172] (0xc001aafe00) (3) Data frame sent I0125 11:40:10.469093 8 log.go:172] (0xc0018f0b00) (0xc001aafe00) Stream removed, broadcasting: 3 I0125 11:40:10.469323 8 log.go:172] (0xc0018f0b00) Data frame received for 1 I0125 11:40:10.469474 8 log.go:172] (0xc0018f0b00) (0xc0010b9f40) Stream removed, broadcasting: 5 I0125 11:40:10.469564 8 log.go:172] (0xc001ca7720) (1) Data frame handling I0125 11:40:10.469631 8 log.go:172] (0xc001ca7720) (1) Data frame sent I0125 11:40:10.469652 8 log.go:172] (0xc0018f0b00) (0xc001ca7720) Stream removed, broadcasting: 1 I0125 11:40:10.469689 8 log.go:172] (0xc0018f0b00) Go away received I0125 11:40:10.469877 8 log.go:172] (0xc0018f0b00) (0xc001ca7720) Stream removed, broadcasting: 1 I0125 11:40:10.469898 8 log.go:172] (0xc0018f0b00) (0xc001aafe00) Stream removed, broadcasting: 3 I0125 11:40:10.469909 8 log.go:172] (0xc0018f0b00) (0xc0010b9f40) Stream removed, broadcasting: 5 Jan 25 11:40:10.469: INFO: Exec stderr: "" Jan 25 11:40:10.470: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:10.470: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:10.570033 8 log.go:172] (0xc001120580) (0xc001db0280) Create 
stream I0125 11:40:10.570187 8 log.go:172] (0xc001120580) (0xc001db0280) Stream added, broadcasting: 1 I0125 11:40:10.607250 8 log.go:172] (0xc001120580) Reply frame received for 1 I0125 11:40:10.607406 8 log.go:172] (0xc001120580) (0xc001862000) Create stream I0125 11:40:10.607426 8 log.go:172] (0xc001120580) (0xc001862000) Stream added, broadcasting: 3 I0125 11:40:10.609517 8 log.go:172] (0xc001120580) Reply frame received for 3 I0125 11:40:10.609591 8 log.go:172] (0xc001120580) (0xc001dca000) Create stream I0125 11:40:10.609608 8 log.go:172] (0xc001120580) (0xc001dca000) Stream added, broadcasting: 5 I0125 11:40:10.610899 8 log.go:172] (0xc001120580) Reply frame received for 5 I0125 11:40:10.725090 8 log.go:172] (0xc001120580) Data frame received for 3 I0125 11:40:10.725149 8 log.go:172] (0xc001862000) (3) Data frame handling I0125 11:40:10.725166 8 log.go:172] (0xc001862000) (3) Data frame sent I0125 11:40:10.852637 8 log.go:172] (0xc001120580) (0xc001862000) Stream removed, broadcasting: 3 I0125 11:40:10.852729 8 log.go:172] (0xc001120580) Data frame received for 1 I0125 11:40:10.852761 8 log.go:172] (0xc001db0280) (1) Data frame handling I0125 11:40:10.852792 8 log.go:172] (0xc001db0280) (1) Data frame sent I0125 11:40:10.852801 8 log.go:172] (0xc001120580) (0xc001db0280) Stream removed, broadcasting: 1 I0125 11:40:10.852828 8 log.go:172] (0xc001120580) (0xc001dca000) Stream removed, broadcasting: 5 I0125 11:40:10.853033 8 log.go:172] (0xc001120580) (0xc001db0280) Stream removed, broadcasting: 1 I0125 11:40:10.853052 8 log.go:172] (0xc001120580) (0xc001862000) Stream removed, broadcasting: 3 I0125 11:40:10.853059 8 log.go:172] (0xc001120580) (0xc001dca000) Stream removed, broadcasting: 5 Jan 25 11:40:10.853: INFO: Exec stderr: "" Jan 25 11:40:10.853: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:10.853: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:10.853460 8 log.go:172] (0xc001120580) Go away received I0125 11:40:10.919999 8 log.go:172] (0xc001120000) (0xc00219e140) Create stream I0125 11:40:10.920112 8 log.go:172] (0xc001120000) (0xc00219e140) Stream added, broadcasting: 1 I0125 11:40:10.923524 8 log.go:172] (0xc001120000) Reply frame received for 1 I0125 11:40:10.923575 8 log.go:172] (0xc001120000) (0xc001862140) Create stream I0125 11:40:10.923590 8 log.go:172] (0xc001120000) (0xc001862140) Stream added, broadcasting: 3 I0125 11:40:10.924452 8 log.go:172] (0xc001120000) Reply frame received for 3 I0125 11:40:10.924487 8 log.go:172] (0xc001120000) (0xc001910000) Create stream I0125 11:40:10.924497 8 log.go:172] (0xc001120000) (0xc001910000) Stream added, broadcasting: 5 I0125 11:40:10.925407 8 log.go:172] (0xc001120000) Reply frame received for 5 I0125 11:40:11.050038 8 log.go:172] (0xc001120000) Data frame received for 3 I0125 11:40:11.050231 8 log.go:172] (0xc001862140) (3) Data frame handling I0125 11:40:11.050312 8 log.go:172] (0xc001862140) (3) Data frame sent I0125 11:40:11.174489 8 log.go:172] (0xc001120000) Data frame received for 1 I0125 11:40:11.174607 8 log.go:172] (0xc001120000) (0xc001862140) Stream removed, broadcasting: 3 I0125 11:40:11.174736 8 log.go:172] (0xc00219e140) (1) Data frame handling I0125 11:40:11.174826 8 log.go:172] (0xc00219e140) (1) Data frame sent I0125 11:40:11.174885 8 log.go:172] (0xc001120000) (0xc001910000) Stream removed, broadcasting: 5 I0125 
11:40:11.174934 8 log.go:172] (0xc001120000) (0xc00219e140) Stream removed, broadcasting: 1 I0125 11:40:11.174969 8 log.go:172] (0xc001120000) Go away received I0125 11:40:11.175062 8 log.go:172] (0xc001120000) (0xc00219e140) Stream removed, broadcasting: 1 I0125 11:40:11.175075 8 log.go:172] (0xc001120000) (0xc001862140) Stream removed, broadcasting: 3 I0125 11:40:11.175081 8 log.go:172] (0xc001120000) (0xc001910000) Stream removed, broadcasting: 5 Jan 25 11:40:11.175: INFO: Exec stderr: "" Jan 25 11:40:11.175: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6ggjg PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 11:40:11.175: INFO: >>> kubeConfig: /root/.kube/config I0125 11:40:11.244558 8 log.go:172] (0xc000937600) (0xc001768280) Create stream I0125 11:40:11.244727 8 log.go:172] (0xc000937600) (0xc001768280) Stream added, broadcasting: 1 I0125 11:40:11.252076 8 log.go:172] (0xc000937600) Reply frame received for 1 I0125 11:40:11.252126 8 log.go:172] (0xc000937600) (0xc00219e1e0) Create stream I0125 11:40:11.252137 8 log.go:172] (0xc000937600) (0xc00219e1e0) Stream added, broadcasting: 3 I0125 11:40:11.253534 8 log.go:172] (0xc000937600) Reply frame received for 3 I0125 11:40:11.253557 8 log.go:172] (0xc000937600) (0xc0017683c0) Create stream I0125 11:40:11.253566 8 log.go:172] (0xc000937600) (0xc0017683c0) Stream added, broadcasting: 5 I0125 11:40:11.254770 8 log.go:172] (0xc000937600) Reply frame received for 5 I0125 11:40:11.355217 8 log.go:172] (0xc000937600) Data frame received for 3 I0125 11:40:11.355321 8 log.go:172] (0xc00219e1e0) (3) Data frame handling I0125 11:40:11.355352 8 log.go:172] (0xc00219e1e0) (3) Data frame sent I0125 11:40:11.469642 8 log.go:172] (0xc000937600) (0xc00219e1e0) Stream removed, broadcasting: 3 I0125 11:40:11.469895 8 log.go:172] (0xc000937600) Data frame received for 1 I0125 11:40:11.469957 8 log.go:172] (0xc001768280) (1) Data frame handling I0125 11:40:11.470001 8 log.go:172] (0xc000937600) (0xc0017683c0) Stream removed, broadcasting: 5 I0125 11:40:11.470092 8 log.go:172] (0xc001768280) (1) Data frame sent I0125 11:40:11.470131 8 log.go:172] (0xc000937600) (0xc001768280) Stream removed, broadcasting: 1 I0125 11:40:11.470160 8 log.go:172] (0xc000937600) Go away received I0125 11:40:11.470508 8 log.go:172] (0xc000937600) (0xc001768280) Stream removed, broadcasting: 1 I0125 11:40:11.470526 8 log.go:172] (0xc000937600) (0xc00219e1e0) Stream removed, broadcasting: 3 I0125 11:40:11.470539 8 log.go:172] (0xc000937600) (0xc0017683c0) Stream removed, broadcasting: 5 Jan 25 11:40:11.470: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:40:11.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-6ggjg" for this suite. 
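
For reference, the pod shape exercised by the KubeletManagedEtcHosts checks above can be sketched roughly as follows: one container is left with the kubelet-managed /etc/hosts, while another mounts its own volume over /etc/hosts, which the kubelet must then leave alone; a companion pod with hostNetwork enabled covers the second half of the verification. This is only a sketch against the 1.13-era k8s.io/api types in use here; the names, the busybox image and the emptyDir backing volume are illustrative assumptions, not the suite's exact fixture.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: busybox-1 keeps the kubelet-managed /etc/hosts,
	// busybox-3 mounts its own volume at /etc/hosts, so the kubelet must not rewrite it.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name:         "hosts-volume",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}, // assumed backing source
			}},
			Containers: []v1.Container{
				{
					Name:    "busybox-1",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
				},
				{
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "900"},
					VolumeMounts: []v1.VolumeMount{{
						Name:      "hosts-volume",
						MountPath: "/etc/hosts", // a container-specified mount disables kubelet management
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

A second pod with Spec.HostNetwork set to true would correspond to the test-host-network-pod checks that follow in the log.
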
Jan 25 11:41:03.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:41:03.783: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-6ggjg, resource: bindings, ignored listing per whitelist Jan 25 11:41:03.900: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-6ggjg deletion completed in 52.419067398s • [SLOW TEST:85.266 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:41:03.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-916ae654-3f67-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:41:04.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-xzjpm" to be "success or failure" Jan 25 11:41:04.449: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 169.757842ms Jan 25 11:41:06.896: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617198681s Jan 25 11:41:09.809: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 5.529838178s Jan 25 11:41:11.830: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.551411642s Jan 25 11:41:15.513: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.234062843s Jan 25 11:41:17.526: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.247384087s Jan 25 11:41:19.727: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.447818667s Jan 25 11:41:21.743: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.464112405s STEP: Saw pod success Jan 25 11:41:21.743: INFO: Pod "pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:41:21.750: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006 container configmap-volume-test: STEP: delete the pod Jan 25 11:41:22.699: INFO: Waiting for pod pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006 to disappear Jan 25 11:41:22.749: INFO: Pod pod-configmaps-916c97da-3f67-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:41:22.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xzjpm" for this suite. Jan 25 11:41:31.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:41:31.131: INFO: namespace: e2e-tests-configmap-xzjpm, resource: bindings, ignored listing per whitelist Jan 25 11:41:31.266: INFO: namespace e2e-tests-configmap-xzjpm deletion completed in 8.507825743s • [SLOW TEST:27.366 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:41:31.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-8qz65/configmap-test-a1aa6d32-3f67-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:41:31.651: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-8qz65" to be "success or failure" Jan 25 11:41:31.670: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.752222ms Jan 25 11:41:33.690: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038399375s Jan 25 11:41:35.705: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053361288s Jan 25 11:41:40.151: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.499776794s Jan 25 11:41:42.202: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.5507336s Jan 25 11:41:44.228: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.576821576s Jan 25 11:41:46.648: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.996958578s STEP: Saw pod success Jan 25 11:41:46.649: INFO: Pod "pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:41:46.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006 container env-test: STEP: delete the pod Jan 25 11:41:46.948: INFO: Waiting for pod pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006 to disappear Jan 25 11:41:46.995: INFO: Pod pod-configmaps-a1ac20d7-3f67-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:41:46.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8qz65" for this suite. Jan 25 11:41:53.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:41:53.244: INFO: namespace: e2e-tests-configmap-8qz65, resource: bindings, ignored listing per whitelist Jan 25 11:41:53.328: INFO: namespace e2e-tests-configmap-8qz65 deletion completed in 6.322154699s • [SLOW TEST:22.062 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:41:53.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 25 11:41:55.262: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mzt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-mzt6p/configmaps/e2e-watch-test-label-changed,UID:afaf1a4e-3f67-11ea-a994-fa163e34d433,ResourceVersion:19406232,Generation:0,CreationTimestamp:2020-01-25 11:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 25 11:41:55.263: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mzt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-mzt6p/configmaps/e2e-watch-test-label-changed,UID:afaf1a4e-3f67-11ea-a994-fa163e34d433,ResourceVersion:19406234,Generation:0,CreationTimestamp:2020-01-25 11:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 25 11:41:55.263: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mzt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-mzt6p/configmaps/e2e-watch-test-label-changed,UID:afaf1a4e-3f67-11ea-a994-fa163e34d433,ResourceVersion:19406235,Generation:0,CreationTimestamp:2020-01-25 11:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 25 11:42:05.440: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mzt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-mzt6p/configmaps/e2e-watch-test-label-changed,UID:afaf1a4e-3f67-11ea-a994-fa163e34d433,ResourceVersion:19406249,Generation:0,CreationTimestamp:2020-01-25 11:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 25 11:42:05.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mzt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-mzt6p/configmaps/e2e-watch-test-label-changed,UID:afaf1a4e-3f67-11ea-a994-fa163e34d433,ResourceVersion:19406250,Generation:0,CreationTimestamp:2020-01-25 11:41:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 25 11:42:05.440: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mzt6p,SelfLink:/api/v1/namespaces/e2e-tests-watch-mzt6p/configmaps/e2e-watch-test-label-changed,UID:afaf1a4e-3f67-11ea-a994-fa163e34d433,ResourceVersion:19406251,Generation:0,CreationTimestamp:2020-01-25 11:41:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:42:05.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-mzt6p" for this suite. Jan 25 11:42:11.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:42:11.718: INFO: namespace: e2e-tests-watch-mzt6p, resource: bindings, ignored listing per whitelist Jan 25 11:42:11.766: INFO: namespace e2e-tests-watch-mzt6p deletion completed in 6.314473827s • [SLOW TEST:18.438 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:42:11.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 25 11:42:22.625: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b9d07c95-3f67-11ea-8a8b-0242ac110006" Jan 25 11:42:22.626: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b9d07c95-3f67-11ea-8a8b-0242ac110006" in namespace "e2e-tests-pods-s44dt" to be "terminated due to deadline exceeded" Jan 25 11:42:22.662: INFO: Pod "pod-update-activedeadlineseconds-b9d07c95-3f67-11ea-8a8b-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 36.115797ms Jan 25 11:42:24.711: INFO: Pod "pod-update-activedeadlineseconds-b9d07c95-3f67-11ea-8a8b-0242ac110006": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.0852457s Jan 25 11:42:24.711: INFO: Pod "pod-update-activedeadlineseconds-b9d07c95-3f67-11ea-8a8b-0242ac110006" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:42:24.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-s44dt" for this suite. 
Jan 25 11:42:30.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:42:30.925: INFO: namespace: e2e-tests-pods-s44dt, resource: bindings, ignored listing per whitelist
Jan 25 11:42:30.944: INFO: namespace e2e-tests-pods-s44dt deletion completed in 6.218300426s
• [SLOW TEST:19.178 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:42:30.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nknsz
Jan 25 11:42:41.198: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nknsz
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:42:41.205: INFO: Initial restart count of pod liveness-exec is 0
Jan 25 11:43:34.368: INFO: Restart count of pod e2e-tests-container-probe-nknsz/liveness-exec is now 1 (53.163228716s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:43:34.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nknsz" for this suite.
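
The liveness-exec pod above boils down to a container whose command creates /tmp/health and later removes it, plus an exec probe that cats the file; once the file disappears the probe fails and the kubelet restarts the container, which is the restart-count change recorded in the log. A rough sketch under assumed timings, image and shell command (and using the 1.13-era embedded field name Handler, called ProbeHandler in newer k8s.io/api releases):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for a while, then the probed file disappears and the probe starts failing.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
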
Jan 25 11:43:42.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:43:42.831: INFO: namespace: e2e-tests-container-probe-nknsz, resource: bindings, ignored listing per whitelist
Jan 25 11:43:42.840: INFO: namespace e2e-tests-container-probe-nknsz deletion completed in 8.357168648s
• [SLOW TEST:71.895 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:43:42.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f01aaf7b-3f67-11ea-8a8b-0242ac110006
STEP: Creating secret with name s-test-opt-upd-f01ab0d9-3f67-11ea-8a8b-0242ac110006
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f01aaf7b-3f67-11ea-8a8b-0242ac110006
STEP: Updating secret s-test-opt-upd-f01ab0d9-3f67-11ea-8a8b-0242ac110006
STEP: Creating secret with name s-test-opt-create-f01ab0fd-3f67-11ea-8a8b-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:43:59.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d6dm9" for this suite.
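
The secrets test above mounts secret-backed volumes marked optional: one whose secret is deleted, one whose secret is updated, and one whose secret is only created after the pod starts, then waits for the projected files to catch up. A minimal sketch of one such optional mount, with the pod name, secret name, mount path and image as illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "creates-volume", // backed by a secret that may not exist yet
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // hypothetical name; the suite appends a unique suffix
						Optional:   &optional,           // pod can start even while the secret is missing
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "creates-volume-test",
				Image:   "busybox",
				Command: []string{"sleep", "900"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "creates-volume",
					MountPath: "/etc/secret-volumes/create",
				}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
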
Jan 25 11:44:25.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:44:25.742: INFO: namespace: e2e-tests-secrets-d6dm9, resource: bindings, ignored listing per whitelist
Jan 25 11:44:25.831: INFO: namespace e2e-tests-secrets-d6dm9 deletion completed in 26.369944043s
• [SLOW TEST:42.990 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:44:25.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 25 11:44:26.204: INFO: Waiting up to 5m0s for pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-tx4w9" to be "success or failure"
Jan 25 11:44:26.213: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.642629ms
Jan 25 11:44:28.244: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04000166s
Jan 25 11:44:30.262: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057807971s
Jan 25 11:44:32.278: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073380324s
Jan 25 11:44:34.287: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082476412s
Jan 25 11:44:36.303: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098286424s
STEP: Saw pod success
Jan 25 11:44:36.303: INFO: Pod "downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:44:36.307: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006 container dapi-container:
STEP: delete the pod
Jan 25 11:44:36.381: INFO: Waiting for pod downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:44:36.406: INFO: Pod downward-api-09ce4977-3f68-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:44:36.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tx4w9" for this suite.
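
The downward-api pod above requests limits.cpu and limits.memory through resourceFieldRef without declaring any limits on the container, so the injected values fall back to the node's allocatable resources; the container (dapi-container in the log) simply prints its environment. A rough sketch under those assumptions, with the pod name and image chosen for illustration:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				// No resources.limits are set, so the values default to node allocatable.
				Env: []v1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &v1.EnvVarSource{
							ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &v1.EnvVarSource{
							ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
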
Jan 25 11:44:43.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:44:43.472: INFO: namespace: e2e-tests-downward-api-tx4w9, resource: bindings, ignored listing per whitelist
Jan 25 11:44:43.553: INFO: namespace e2e-tests-downward-api-tx4w9 deletion completed in 7.118967048s
• [SLOW TEST:17.723 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:44:43.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 11:44:43.954: INFO: Number of nodes with available pods: 0
Jan 25 11:44:43.954: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:44.985: INFO: Number of nodes with available pods: 0
Jan 25 11:44:44.985: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:45.989: INFO: Number of nodes with available pods: 0
Jan 25 11:44:45.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:46.976: INFO: Number of nodes with available pods: 0
Jan 25 11:44:46.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:47.978: INFO: Number of nodes with available pods: 0
Jan 25 11:44:47.978: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:48.994: INFO: Number of nodes with available pods: 0
Jan 25 11:44:48.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:50.205: INFO: Number of nodes with available pods: 0
Jan 25 11:44:50.205: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:50.976: INFO: Number of nodes with available pods: 0
Jan 25 11:44:50.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:51.971: INFO: Number of nodes with available pods: 0
Jan 25 11:44:51.971: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:44:52.978: INFO: Number of nodes with available pods: 1
Jan 25 11:44:52.979: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 25 11:44:53.161: INFO: Number of nodes with available pods: 1 Jan 25 11:44:53.161: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mlwdb, will wait for the garbage collector to delete the pods Jan 25 11:44:55.348: INFO: Deleting DaemonSet.extensions daemon-set took: 66.119893ms Jan 25 11:44:55.449: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.017911ms Jan 25 11:44:59.863: INFO: Number of nodes with available pods: 0 Jan 25 11:44:59.863: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 11:44:59.870: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mlwdb/daemonsets","resourceVersion":"19406628"},"items":null} Jan 25 11:44:59.875: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mlwdb/pods","resourceVersion":"19406628"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:44:59.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mlwdb" for this suite. Jan 25 11:45:06.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:45:06.136: INFO: namespace: e2e-tests-daemonsets-mlwdb, resource: bindings, ignored listing per whitelist Jan 25 11:45:06.194: INFO: namespace e2e-tests-daemonsets-mlwdb deletion completed in 6.301222784s • [SLOW TEST:22.640 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:45:06.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006 Jan 25 11:45:06.420: INFO: Pod name my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006: Found 0 pods out of 1 Jan 25 11:45:11.935: INFO: Pod name my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006: Found 1 pods out of 1 Jan 25 11:45:11.935: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006" are running Jan 25 11:45:15.968: INFO: Pod 
"my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006-n96k8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:45:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:45:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:45:06 +0000 UTC Reason: Message:}]) Jan 25 11:45:15.968: INFO: Trying to dial the pod Jan 25 11:45:21.057: INFO: Controller my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006: Got expected result from replica 1 [my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006-n96k8]: "my-hostname-basic-21c48429-3f68-11ea-8a8b-0242ac110006-n96k8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:45:21.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-lqsqn" for this suite. Jan 25 11:45:27.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:45:27.232: INFO: namespace: e2e-tests-replication-controller-lqsqn, resource: bindings, ignored listing per whitelist Jan 25 11:45:27.291: INFO: namespace e2e-tests-replication-controller-lqsqn deletion completed in 6.222465829s • [SLOW TEST:21.096 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:45:27.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-h8s72 Jan 25 11:45:39.595: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-h8s72 STEP: checking the pod's current state and verifying that restartCount is present Jan 25 11:45:39.599: INFO: Initial restart count of pod liveness-http is 0 Jan 25 11:46:04.293: INFO: 
Restart count of pod e2e-tests-container-probe-h8s72/liveness-http is now 1 (24.693661838s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:46:04.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-h8s72" for this suite. Jan 25 11:46:10.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:46:10.767: INFO: namespace: e2e-tests-container-probe-h8s72, resource: bindings, ignored listing per whitelist Jan 25 11:46:10.792: INFO: namespace e2e-tests-container-probe-h8s72 deletion completed in 6.387444786s • [SLOW TEST:43.501 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:46:10.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 25 11:46:10.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:13.293: INFO: stderr: "" Jan 25 11:46:13.294: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 25 11:46:13.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:13.648: INFO: stderr: "" Jan 25 11:46:13.648: INFO: stdout: "update-demo-nautilus-7lxh4 update-demo-nautilus-gq6rm " Jan 25 11:46:13.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lxh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:13.792: INFO: stderr: "" Jan 25 11:46:13.792: INFO: stdout: "" Jan 25 11:46:13.792: INFO: update-demo-nautilus-7lxh4 is created but not running Jan 25 11:46:18.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:19.462: INFO: stderr: "" Jan 25 11:46:19.462: INFO: stdout: "update-demo-nautilus-7lxh4 update-demo-nautilus-gq6rm " Jan 25 11:46:19.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lxh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:20.472: INFO: stderr: "" Jan 25 11:46:20.472: INFO: stdout: "" Jan 25 11:46:20.472: INFO: update-demo-nautilus-7lxh4 is created but not running Jan 25 11:46:25.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:25.642: INFO: stderr: "" Jan 25 11:46:25.642: INFO: stdout: "update-demo-nautilus-7lxh4 update-demo-nautilus-gq6rm " Jan 25 11:46:25.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lxh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:25.768: INFO: stderr: "" Jan 25 11:46:25.768: INFO: stdout: "true" Jan 25 11:46:25.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7lxh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:25.946: INFO: stderr: "" Jan 25 11:46:25.946: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:25.946: INFO: validating pod update-demo-nautilus-7lxh4 Jan 25 11:46:25.971: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:46:25.971: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:46:25.971: INFO: update-demo-nautilus-7lxh4 is verified up and running Jan 25 11:46:25.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:26.104: INFO: stderr: "" Jan 25 11:46:26.104: INFO: stdout: "true" Jan 25 11:46:26.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:26.236: INFO: stderr: "" Jan 25 11:46:26.236: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:26.236: INFO: validating pod update-demo-nautilus-gq6rm Jan 25 11:46:26.245: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:46:26.245: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:46:26.245: INFO: update-demo-nautilus-gq6rm is verified up and running STEP: scaling down the replication controller Jan 25 11:46:26.248: INFO: scanned /root for discovery docs: Jan 25 11:46:26.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:27.421: INFO: stderr: "" Jan 25 11:46:27.421: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 25 11:46:27.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:27.610: INFO: stderr: "" Jan 25 11:46:27.610: INFO: stdout: "update-demo-nautilus-7lxh4 update-demo-nautilus-gq6rm " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 25 11:46:32.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:32.746: INFO: stderr: "" Jan 25 11:46:32.747: INFO: stdout: "update-demo-nautilus-7lxh4 update-demo-nautilus-gq6rm " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 25 11:46:37.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:37.963: INFO: stderr: "" Jan 25 11:46:37.963: INFO: stdout: "update-demo-nautilus-7lxh4 update-demo-nautilus-gq6rm " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 25 11:46:42.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:43.172: INFO: stderr: "" Jan 25 11:46:43.173: INFO: stdout: "update-demo-nautilus-gq6rm " Jan 25 11:46:43.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:43.329: INFO: stderr: "" Jan 25 11:46:43.329: INFO: stdout: "true" Jan 25 11:46:43.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:43.437: INFO: stderr: "" Jan 25 11:46:43.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:43.438: INFO: validating pod update-demo-nautilus-gq6rm Jan 25 11:46:43.451: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:46:43.451: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:46:43.451: INFO: update-demo-nautilus-gq6rm is verified up and running STEP: scaling up the replication controller Jan 25 11:46:43.454: INFO: scanned /root for discovery docs: Jan 25 11:46:43.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:44.668: INFO: stderr: "" Jan 25 11:46:44.668: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 25 11:46:44.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:44.789: INFO: stderr: "" Jan 25 11:46:44.789: INFO: stdout: "update-demo-nautilus-gq6rm update-demo-nautilus-m7h5q " Jan 25 11:46:44.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:44.898: INFO: stderr: "" Jan 25 11:46:44.898: INFO: stdout: "true" Jan 25 11:46:44.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:45.001: INFO: stderr: "" Jan 25 11:46:45.001: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:45.001: INFO: validating pod update-demo-nautilus-gq6rm Jan 25 11:46:45.017: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:46:45.017: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:46:45.018: INFO: update-demo-nautilus-gq6rm is verified up and running Jan 25 11:46:45.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7h5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:45.902: INFO: stderr: "" Jan 25 11:46:45.902: INFO: stdout: "" Jan 25 11:46:45.902: INFO: update-demo-nautilus-m7h5q is created but not running Jan 25 11:46:50.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:51.073: INFO: stderr: "" Jan 25 11:46:51.073: INFO: stdout: "update-demo-nautilus-gq6rm update-demo-nautilus-m7h5q " Jan 25 11:46:51.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:51.181: INFO: stderr: "" Jan 25 11:46:51.181: INFO: stdout: "true" Jan 25 11:46:51.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:51.325: INFO: stderr: "" Jan 25 11:46:51.325: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:51.325: INFO: validating pod update-demo-nautilus-gq6rm Jan 25 11:46:51.340: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:46:51.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:46:51.340: INFO: update-demo-nautilus-gq6rm is verified up and running Jan 25 11:46:51.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7h5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:51.482: INFO: stderr: "" Jan 25 11:46:51.482: INFO: stdout: "" Jan 25 11:46:51.482: INFO: update-demo-nautilus-m7h5q is created but not running Jan 25 11:46:56.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:56.746: INFO: stderr: "" Jan 25 11:46:56.746: INFO: stdout: "update-demo-nautilus-gq6rm update-demo-nautilus-m7h5q " Jan 25 11:46:56.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:56.889: INFO: stderr: "" Jan 25 11:46:56.889: INFO: stdout: "true" Jan 25 11:46:56.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gq6rm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:57.040: INFO: stderr: "" Jan 25 11:46:57.040: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:57.040: INFO: validating pod update-demo-nautilus-gq6rm Jan 25 11:46:57.052: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:46:57.052: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:46:57.052: INFO: update-demo-nautilus-gq6rm is verified up and running Jan 25 11:46:57.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7h5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:57.189: INFO: stderr: "" Jan 25 11:46:57.190: INFO: stdout: "true" Jan 25 11:46:57.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m7h5q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:46:57.329: INFO: stderr: "" Jan 25 11:46:57.329: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 11:46:57.329: INFO: validating pod update-demo-nautilus-m7h5q Jan 25 11:47:00.399: INFO: got data: { "image": "nautilus.jpg" } Jan 25 11:47:00.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 11:47:00.399: INFO: update-demo-nautilus-m7h5q is verified up and running STEP: using delete to clean up resources Jan 25 11:47:00.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:47:00.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 11:47:00.753: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 25 11:47:00.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-z4tvt' Jan 25 11:47:00.907: INFO: stderr: "No resources found.\n" Jan 25 11:47:00.908: INFO: stdout: "" Jan 25 11:47:00.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-z4tvt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 11:47:01.163: INFO: stderr: "" Jan 25 11:47:01.163: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:47:01.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z4tvt" for this suite. 
Jan 25 11:47:25.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:47:25.327: INFO: namespace: e2e-tests-kubectl-z4tvt, resource: bindings, ignored listing per whitelist Jan 25 11:47:25.395: INFO: namespace e2e-tests-kubectl-z4tvt deletion completed in 24.208451147s • [SLOW TEST:74.603 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:47:25.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jan 25 11:47:25.605: INFO: Waiting up to 5m0s for pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-var-expansion-7jb88" to be "success or failure" Jan 25 11:47:25.687: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 81.467086ms Jan 25 11:47:27.703: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097187182s Jan 25 11:47:29.717: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111132406s Jan 25 11:47:31.866: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260473258s Jan 25 11:47:33.881: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.275646559s Jan 25 11:47:36.344: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.738817977s STEP: Saw pod success Jan 25 11:47:36.345: INFO: Pod "var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:47:36.645: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006 container dapi-container: STEP: delete the pod Jan 25 11:47:36.808: INFO: Waiting for pod var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006 to disappear Jan 25 11:47:36.826: INFO: Pod var-expansion-74b96360-3f68-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:47:36.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7jb88" for this suite. 
Jan 25 11:47:44.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:47:45.182: INFO: namespace: e2e-tests-var-expansion-7jb88, resource: bindings, ignored listing per whitelist Jan 25 11:47:45.195: INFO: namespace e2e-tests-var-expansion-7jb88 deletion completed in 8.344778363s • [SLOW TEST:19.799 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:47:45.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-80809537-3f68-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume secrets Jan 25 11:47:45.390: INFO: Waiting up to 5m0s for pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-6v4b5" to be "success or failure" Jan 25 11:47:45.400: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.490145ms Jan 25 11:47:47.423: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03231456s Jan 25 11:47:49.447: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056481043s Jan 25 11:47:51.477: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086570856s Jan 25 11:47:53.489: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098676862s Jan 25 11:47:55.501: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110866693s STEP: Saw pod success Jan 25 11:47:55.501: INFO: Pod "pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:47:55.505: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006 container secret-env-test: STEP: delete the pod Jan 25 11:47:56.099: INFO: Waiting for pod pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006 to disappear Jan 25 11:47:56.244: INFO: Pod pod-secrets-8084d1a1-3f68-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:47:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-6v4b5" for this suite. 
Jan 25 11:48:02.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:48:02.322: INFO: namespace: e2e-tests-secrets-6v4b5, resource: bindings, ignored listing per whitelist Jan 25 11:48:02.674: INFO: namespace e2e-tests-secrets-6v4b5 deletion completed in 6.413814606s • [SLOW TEST:17.480 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:48:02.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-8b0247b3-3f68-11ea-8a8b-0242ac110006 STEP: Creating a pod to test consume configMaps Jan 25 11:48:02.993: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-skjkp" to be "success or failure" Jan 25 11:48:03.085: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 92.158258ms Jan 25 11:48:05.100: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107126116s Jan 25 11:48:07.186: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192473101s Jan 25 11:48:09.207: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213441172s Jan 25 11:48:11.387: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.393826527s Jan 25 11:48:13.852: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.858630651s STEP: Saw pod success Jan 25 11:48:13.852: INFO: Pod "pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:48:13.865: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006 container configmap-volume-test: STEP: delete the pod Jan 25 11:48:14.939: INFO: Waiting for pod pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006 to disappear Jan 25 11:48:15.137: INFO: Pod pod-configmaps-8b062cb0-3f68-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:48:15.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-skjkp" for this suite. 
Jan 25 11:48:21.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:48:21.322: INFO: namespace: e2e-tests-configmap-skjkp, resource: bindings, ignored listing per whitelist Jan 25 11:48:21.362: INFO: namespace e2e-tests-configmap-skjkp deletion completed in 6.212814327s • [SLOW TEST:18.687 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:48:21.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 11:48:21.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-2hxpf" to be "success or failure" Jan 25 11:48:21.597: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.092823ms Jan 25 11:48:24.221: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.636258637s Jan 25 11:48:26.240: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.655281084s Jan 25 11:48:28.283: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.698087664s Jan 25 11:48:30.305: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72030367s Jan 25 11:48:32.372: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.786695899s STEP: Saw pod success Jan 25 11:48:32.372: INFO: Pod "downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:48:32.382: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 11:48:32.568: INFO: Waiting for pod downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006 to disappear Jan 25 11:48:32.595: INFO: Pod downwardapi-volume-961a6a65-3f68-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:48:32.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2hxpf" for this suite. Jan 25 11:48:38.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:48:39.000: INFO: namespace: e2e-tests-downward-api-2hxpf, resource: bindings, ignored listing per whitelist Jan 25 11:48:39.044: INFO: namespace e2e-tests-downward-api-2hxpf deletion completed in 6.436110563s • [SLOW TEST:17.681 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:48:39.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 25 11:48:39.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-x7tdr" to be "success or failure" Jan 25 11:48:39.231: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.732479ms Jan 25 11:48:41.260: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040770131s Jan 25 11:48:43.279: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060544472s Jan 25 11:48:46.012: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.793497358s Jan 25 11:48:48.025: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.806064788s Jan 25 11:48:50.042: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.823645765s STEP: Saw pod success Jan 25 11:48:50.043: INFO: Pod "downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure" Jan 25 11:48:50.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006 container client-container: STEP: delete the pod Jan 25 11:48:50.568: INFO: Waiting for pod downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006 to disappear Jan 25 11:48:50.580: INFO: Pod downwardapi-volume-a09b720e-3f68-11ea-8a8b-0242ac110006 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 25 11:48:50.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x7tdr" for this suite. Jan 25 11:48:56.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 25 11:48:56.764: INFO: namespace: e2e-tests-downward-api-x7tdr, resource: bindings, ignored listing per whitelist Jan 25 11:48:56.778: INFO: namespace e2e-tests-downward-api-x7tdr deletion completed in 6.187592098s • [SLOW TEST:17.734 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 25 11:48:56.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 25 11:48:56.993: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 14.458635ms)
Jan 25 11:48:56.999: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.70502ms)
Jan 25 11:48:57.004: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.528279ms)
Jan 25 11:48:57.010: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.490916ms)
Jan 25 11:48:57.015: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.774884ms)
Jan 25 11:48:57.021: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.113149ms)
Jan 25 11:48:57.026: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.296676ms)
Jan 25 11:48:57.030: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.538826ms)
Jan 25 11:48:57.034: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.916192ms)
Jan 25 11:48:57.039: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.047231ms)
Jan 25 11:48:57.047: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.926995ms)
Jan 25 11:48:57.056: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.752276ms)
Jan 25 11:48:57.063: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.019884ms)
Jan 25 11:48:57.124: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 60.509766ms)
Jan 25 11:48:57.134: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.148549ms)
Jan 25 11:48:57.143: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.509877ms)
Jan 25 11:48:57.152: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.711152ms)
Jan 25 11:48:57.160: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.481076ms)
Jan 25 11:48:57.166: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.361981ms)
Jan 25 11:48:57.173: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.875113ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:48:57.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-gv7sf" for this suite.
Jan 25 11:49:03.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:49:03.328: INFO: namespace: e2e-tests-proxy-gv7sf, resource: bindings, ignored listing per whitelist
Jan 25 11:49:03.412: INFO: namespace e2e-tests-proxy-gv7sf deletion completed in 6.233930412s

• [SLOW TEST:6.634 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
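The proxy spec above never creates a pod; it issues twenty GETs against the node proxy subresource and checks that the kubelet's /logs/ directory listing comes back with a 200 and a reasonable latency. A minimal way to issue the same request by hand against this cluster (node name and port taken from the log above; kubectl get --raw passes the path to the apiserver verbatim):

kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"

The explicit :10250 in the node name is what the "explicit kubelet port" variant of the spec refers to; without a port the apiserver proxies to the node's default kubelet port.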
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:49:03.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-af3016f6-3f68-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 25 11:49:03.710: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-grl7l" to be "success or failure"
Jan 25 11:49:03.725: INFO: Pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.767306ms
Jan 25 11:49:06.013: INFO: Pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302374523s
Jan 25 11:49:08.031: INFO: Pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320626607s
Jan 25 11:49:10.611: INFO: Pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.900275086s
Jan 25 11:49:12.635: INFO: Pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.924181157s
STEP: Saw pod success
Jan 25 11:49:12.635: INFO: Pod "pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:49:12.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 11:49:12.775: INFO: Waiting for pod pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:49:12.783: INFO: Pod pod-projected-configmaps-af3267a7-3f68-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:49:12.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-grl7l" for this suite.
Jan 25 11:49:18.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:49:18.974: INFO: namespace: e2e-tests-projected-grl7l, resource: bindings, ignored listing per whitelist
Jan 25 11:49:19.033: INFO: namespace e2e-tests-projected-grl7l deletion completed in 6.242693407s

• [SLOW TEST:15.620 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
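The projected-configMap spec follows the volume-test pattern visible above: create a ConfigMap, mount it into a short-lived pod, let the container print the mounted file, and grade the pod on "success or failure". A sketch of an equivalent manifest, with placeholder names rather than the generated ones, showing where defaultMode sits on a projected volume:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400
      sources:
      - configMap:
          name: projected-cm-demo
EOF
kubectl logs pod-projected-configmaps-demo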
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:49:19.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b88f85e3-3f68-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 11:49:19.397: INFO: Waiting up to 5m0s for pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-wxk2x" to be "success or failure"
Jan 25 11:49:19.415: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 17.885722ms
Jan 25 11:49:21.431: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034307134s
Jan 25 11:49:23.460: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062918192s
Jan 25 11:49:25.485: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08866303s
Jan 25 11:49:27.741: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344551492s
Jan 25 11:49:29.779: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.382616226s
STEP: Saw pod success
Jan 25 11:49:29.780: INFO: Pod "pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:49:29.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan 25 11:49:30.398: INFO: Waiting for pod pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:49:30.408: INFO: Pod pod-secrets-b8908710-3f68-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:49:30.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wxk2x" for this suite.
Jan 25 11:49:36.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:49:36.642: INFO: namespace: e2e-tests-secrets-wxk2x, resource: bindings, ignored listing per whitelist
Jan 25 11:49:36.760: INFO: namespace e2e-tests-secrets-wxk2x deletion completed in 6.344831262s

• [SLOW TEST:17.727 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
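For the secret-volume case the interesting part is the combination of a file mode and a non-root security context: defaultMode controls the permission bits on the projected files, while runAsUser and fsGroup make them readable to an unprivileged UID/GID. A manifest along these lines (names and IDs are placeholders):

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      defaultMode: 0440
EOF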
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:49:36.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:49:37.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-stzqt" for this suite.
Jan 25 11:49:44.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:49:44.341: INFO: namespace: e2e-tests-kubelet-test-stzqt, resource: bindings, ignored listing per whitelist
Jan 25 11:49:44.358: INFO: namespace e2e-tests-kubelet-test-stzqt deletion completed in 6.51334122s

• [SLOW TEST:7.597 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
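This Kubelet spec logs almost nothing because there is nothing to wait for: it schedules a busybox container whose command fails immediately and only asserts that the pod can still be deleted cleanly. Roughly reproduced as:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
kubectl delete pod bin-false-demo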
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:49:44.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-jnxqp/configmap-test-c788e4f5-3f68-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 25 11:49:44.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-jnxqp" to be "success or failure"
Jan 25 11:49:44.686: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.999272ms
Jan 25 11:49:46.950: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280896146s
Jan 25 11:49:48.971: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301005627s
Jan 25 11:49:51.213: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543523873s
Jan 25 11:49:53.231: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561309553s
Jan 25 11:49:55.567: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.897020386s
Jan 25 11:49:57.576: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.906397293s
STEP: Saw pod success
Jan 25 11:49:57.576: INFO: Pod "pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:49:57.580: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006 container env-test: 
STEP: delete the pod
Jan 25 11:49:57.905: INFO: Waiting for pod pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:49:57.929: INFO: Pod pod-configmaps-c79f7125-3f68-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:49:57.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jnxqp" for this suite.
Jan 25 11:50:04.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:50:04.061: INFO: namespace: e2e-tests-configmap-jnxqp, resource: bindings, ignored listing per whitelist
Jan 25 11:50:04.115: INFO: namespace e2e-tests-configmap-jnxqp deletion completed in 6.14347161s

• [SLOW TEST:19.757 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
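The env-test container above receives the ConfigMap contents as environment variables rather than as files, and the suite validates the pod's log output. Equivalent wiring, with placeholder names:

kubectl create configmap configmap-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF
kubectl logs pod-configmaps-env-demo   # expect CONFIG_DATA_1=value-1 in the output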
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:50:04.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0125 11:50:14.427598       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 11:50:14.427: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:50:14.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-449gl" for this suite.
Jan 25 11:50:20.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:50:20.569: INFO: namespace: e2e-tests-gc-449gl, resource: bindings, ignored listing per whitelist
Jan 25 11:50:20.674: INFO: namespace e2e-tests-gc-449gl deletion completed in 6.24233848s

• [SLOW TEST:16.558 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
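The garbage-collector steps above ("create the rc", "delete the rc", "wait for all pods to be garbage collected") work because the RC's pods carry an ownerReference back to the RC, so a non-orphaning delete of the owner lets the garbage collector remove the dependents as well. A rough command-line equivalent (resource names are placeholders; kubectl's default delete already cascades rather than orphaning):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl delete rc gc-demo-rc
kubectl get pods -l app=gc-demo   # should eventually return no resources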
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:50:20.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fx6xn
Jan 25 11:50:30.830: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fx6xn
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:50:30.837: INFO: Initial restart count of pod liveness-http is 0
Jan 25 11:50:53.695: INFO: Restart count of pod e2e-tests-container-probe-fx6xn/liveness-http is now 1 (22.85787673s elapsed)
Jan 25 11:51:14.084: INFO: Restart count of pod e2e-tests-container-probe-fx6xn/liveness-http is now 2 (43.246799173s elapsed)
Jan 25 11:51:32.662: INFO: Restart count of pod e2e-tests-container-probe-fx6xn/liveness-http is now 3 (1m1.825449348s elapsed)
Jan 25 11:51:52.888: INFO: Restart count of pod e2e-tests-container-probe-fx6xn/liveness-http is now 4 (1m22.051434033s elapsed)
Jan 25 11:53:01.621: INFO: Restart count of pod e2e-tests-container-probe-fx6xn/liveness-http is now 5 (2m30.784151196s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:53:01.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fx6xn" for this suite.
Jan 25 11:53:07.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:53:07.943: INFO: namespace: e2e-tests-container-probe-fx6xn, resource: bindings, ignored listing per whitelist
Jan 25 11:53:07.997: INFO: namespace e2e-tests-container-probe-fx6xn deletion completed in 6.253995855s

• [SLOW TEST:167.323 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
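The climbing restart counts above (1 through 5 in roughly two and a half minutes) come from an HTTP liveness probe that keeps failing: the kubelet repeatedly kills and restarts the container, and the spec only checks that the count never decreases. A pod with the same shape, using the upstream liveness example image as a stand-in (an assumption; any HTTP server whose health endpoint eventually starts failing would do):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # assumed stand-in: serves /healthz, then starts returning errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
kubectl get pod liveness-http-demo -w   # watch the RESTARTS column climb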
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:53:07.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:53:08.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-ndc2f" to be "success or failure"
Jan 25 11:53:08.190: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 66.133613ms
Jan 25 11:53:10.206: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082082178s
Jan 25 11:53:12.226: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101432949s
Jan 25 11:53:14.258: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133374291s
Jan 25 11:53:16.738: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.613570398s
Jan 25 11:53:19.226: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.101777783s
STEP: Saw pod success
Jan 25 11:53:19.226: INFO: Pod "downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:53:19.236: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 11:53:19.612: INFO: Waiting for pod downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:53:19.628: INFO: Pod downwardapi-volume-40e56c23-3f69-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:53:19.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ndc2f" for this suite.
Jan 25 11:53:25.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:53:25.945: INFO: namespace: e2e-tests-projected-ndc2f, resource: bindings, ignored listing per whitelist
Jan 25 11:53:25.964: INFO: namespace e2e-tests-projected-ndc2f deletion completed in 6.323762999s

• [SLOW TEST:17.967 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
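Here the downward API, projected as a volume, exposes the container's own CPU limit as a file, and the suite reads it back from the container log. The relevant piece is resourceFieldRef with a divisor; a sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: "1m"
EOF
kubectl logs downwardapi-cpu-limit-demo   # prints 500, i.e. limits.cpu expressed in units of the 1m divisor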
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:53:25.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 11:53:26.247: INFO: Waiting up to 5m0s for pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-62m2c" to be "success or failure"
Jan 25 11:53:26.385: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 137.980779ms
Jan 25 11:53:28.522: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274809951s
Jan 25 11:53:30.552: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304952222s
Jan 25 11:53:32.635: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387874048s
Jan 25 11:53:34.690: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.442529397s
Jan 25 11:53:36.755: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.507411506s
STEP: Saw pod success
Jan 25 11:53:36.755: INFO: Pod "pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:53:36.763: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 11:53:36.914: INFO: Waiting for pod pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:53:36.921: INFO: Pod pod-4bb2fc82-3f69-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:53:36.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-62m2c" for this suite.
Jan 25 11:53:42.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:53:43.007: INFO: namespace: e2e-tests-emptydir-62m2c, resource: bindings, ignored listing per whitelist
Jan 25 11:53:43.095: INFO: namespace e2e-tests-emptydir-62m2c deletion completed in 6.168642972s

• [SLOW TEST:17.130 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
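The emptyDir case above runs the container as a non-root UID against a memory-backed (tmpfs) emptyDir and checks that a file created with mode 0666 comes back with those permissions. A rough sketch of an equivalent pod, with the same caveats (configured clientset, pre-1.17 signatures; image, UID, paths and command are illustrative):

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createTmpfsEmptyDirPod runs as a non-root UID, writes a 0666 file into a
    // memory-backed (tmpfs) emptyDir volume, and prints its permissions.
    func createTmpfsEmptyDirPod(c kubernetes.Interface, ns string) (*v1.Pod, error) {
        nonRootUID := int64(1001)
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    SecurityContext: &v1.SecurityContext{RunAsUser: &nonRootUID},
                    VolumeMounts:    []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    VolumeSource: v1.VolumeSource{
                        EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                    },
                }},
            },
        }
        return c.CoreV1().Pods(ns).Create(pod)
    }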
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:53:43.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 11:53:43.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 25 11:53:43.460: INFO: stderr: ""
Jan 25 11:53:43.461: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 25 11:53:43.465: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:53:43.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9hzzp" for this suite.
Jan 25 11:53:49.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:53:49.627: INFO: namespace: e2e-tests-kubectl-9hzzp, resource: bindings, ignored listing per whitelist
Jan 25 11:53:49.718: INFO: namespace e2e-tests-kubectl-9hzzp deletion completed in 6.244014006s

S [SKIPPING] [6.622 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 25 11:53:43.465: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:53:49.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0125 11:53:53.174136       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 11:53:53.174: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:53:53.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hnwkd" for this suite.
Jan 25 11:54:01.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:54:01.513: INFO: namespace: e2e-tests-gc-hnwkd, resource: bindings, ignored listing per whitelist
Jan 25 11:54:01.769: INFO: namespace e2e-tests-gc-hnwkd deletion completed in 8.583683963s

• [SLOW TEST:12.050 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
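The intermediate "expected 0 pods, got 2 pods" / "expected 0 rs, got 1 rs" steps above are the poll loop catching the garbage collector mid-flight: the Deployment is deleted without orphaning, so the ReplicaSet and pods it owns carry owner references and are collected shortly afterwards. A sketch of that delete-and-wait pattern (assumed helper; pre-1.17 signatures; the propagation policy, selector argument and timeouts are illustrative):

    package e2esketch

    import (
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // deleteDeploymentAndWaitForGC deletes a deployment without orphaning its
    // dependents, then waits for the owned ReplicaSets to be garbage collected.
    func deleteDeploymentAndWaitForGC(c kubernetes.Interface, ns, name, selector string) error {
        policy := metav1.DeletePropagationBackground // do not orphan the ReplicaSet
        if err := c.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{
            PropagationPolicy: &policy,
        }); err != nil {
            return err
        }
        // Poll until the garbage collector has removed every owned ReplicaSet.
        return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
            rsList, err := c.AppsV1().ReplicaSets(ns).List(metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, err
            }
            return len(rsList.Items) == 0, nil
        })
    }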
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:54:01.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 11:54:02.258: INFO: Number of nodes with available pods: 0
Jan 25 11:54:02.258: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:03.725: INFO: Number of nodes with available pods: 0
Jan 25 11:54:03.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:04.358: INFO: Number of nodes with available pods: 0
Jan 25 11:54:04.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:05.286: INFO: Number of nodes with available pods: 0
Jan 25 11:54:05.286: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:06.290: INFO: Number of nodes with available pods: 0
Jan 25 11:54:06.290: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:08.238: INFO: Number of nodes with available pods: 0
Jan 25 11:54:08.239: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:08.388: INFO: Number of nodes with available pods: 0
Jan 25 11:54:08.388: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:09.476: INFO: Number of nodes with available pods: 0
Jan 25 11:54:09.477: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:10.291: INFO: Number of nodes with available pods: 0
Jan 25 11:54:10.292: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:11.293: INFO: Number of nodes with available pods: 0
Jan 25 11:54:11.293: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:12.298: INFO: Number of nodes with available pods: 1
Jan 25 11:54:12.298: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 25 11:54:12.491: INFO: Number of nodes with available pods: 0
Jan 25 11:54:12.492: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:13.511: INFO: Number of nodes with available pods: 0
Jan 25 11:54:13.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:14.596: INFO: Number of nodes with available pods: 0
Jan 25 11:54:14.596: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:15.511: INFO: Number of nodes with available pods: 0
Jan 25 11:54:15.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:16.551: INFO: Number of nodes with available pods: 0
Jan 25 11:54:16.551: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:17.515: INFO: Number of nodes with available pods: 0
Jan 25 11:54:17.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:18.581: INFO: Number of nodes with available pods: 0
Jan 25 11:54:18.582: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:19.510: INFO: Number of nodes with available pods: 0
Jan 25 11:54:19.510: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:20.520: INFO: Number of nodes with available pods: 0
Jan 25 11:54:20.520: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:21.514: INFO: Number of nodes with available pods: 0
Jan 25 11:54:21.514: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:22.548: INFO: Number of nodes with available pods: 0
Jan 25 11:54:22.548: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:25.515: INFO: Number of nodes with available pods: 0
Jan 25 11:54:25.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:26.679: INFO: Number of nodes with available pods: 0
Jan 25 11:54:26.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:27.513: INFO: Number of nodes with available pods: 0
Jan 25 11:54:27.513: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:28.570: INFO: Number of nodes with available pods: 0
Jan 25 11:54:28.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 11:54:29.527: INFO: Number of nodes with available pods: 1
Jan 25 11:54:29.527: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wqlpd, will wait for the garbage collector to delete the pods
Jan 25 11:54:29.626: INFO: Deleting DaemonSet.extensions daemon-set took: 29.193062ms
Jan 25 11:54:29.827: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.198842ms
Jan 25 11:54:36.832: INFO: Number of nodes with available pods: 0
Jan 25 11:54:36.832: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 11:54:36.838: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wqlpd/daemonsets","resourceVersion":"19407903"},"items":null}

Jan 25 11:54:36.842: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wqlpd/pods","resourceVersion":"19407903"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:54:36.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wqlpd" for this suite.
Jan 25 11:54:42.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:54:43.031: INFO: namespace: e2e-tests-daemonsets-wqlpd, resource: bindings, ignored listing per whitelist
Jan 25 11:54:43.111: INFO: namespace e2e-tests-daemonsets-wqlpd deletion completed in 6.243072203s

• [SLOW TEST:41.341 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
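On this single-node cluster the DaemonSet check converges at "Number of running nodes: 1, number of available pods: 1"; the repeated "is running more than one daemon pod" lines are just the framework's status message while the counts have not settled. A minimal sketch of the kind of DaemonSet the spec creates (labels, image and command are illustrative; the name follows the log):

    package e2esketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createSimpleDaemonSet creates a DaemonSet that keeps one long-lived pod per node.
    func createSimpleDaemonSet(c kubernetes.Interface, ns string) (*appsv1.DaemonSet, error) {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:    "app",
                            Image:   "docker.io/library/busybox:1.29",
                            Command: []string{"sh", "-c", "sleep 3600"},
                        }},
                    },
                },
            },
        }
        return c.AppsV1().DaemonSets(ns).Create(ds)
    }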
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:54:43.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 11:54:43.301: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 25 11:54:48.438: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 11:54:52.477: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 25 11:54:52.719: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-nhcsh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nhcsh/deployments/test-cleanup-deployment,UID:7f205769-3f69-11ea-a994-fa163e34d433,ResourceVersion:19407956,Generation:1,CreationTimestamp:2020-01-25 11:54:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 25 11:54:52.722: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:54:52.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nhcsh" for this suite.
Jan 25 11:55:00.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:55:00.826: INFO: namespace: e2e-tests-deployment-nhcsh, resource: bindings, ignored listing per whitelist
Jan 25 11:55:00.925: INFO: namespace e2e-tests-deployment-nhcsh deletion completed in 8.190639122s

• [SLOW TEST:17.813 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
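The cleanup behaviour being tested comes from spec.revisionHistoryLimit, shown as RevisionHistoryLimit:*0 in the dump above: with a limit of 0 the deployment controller deletes superseded ReplicaSets as soon as a rollout moves past them. A sketch of such a Deployment, reusing the name, labels, image and replica count visible in the dump (everything else is default or illustrative; pre-1.17 signatures):

    package e2esketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createCleanupDeployment creates a deployment whose revisionHistoryLimit of 0
    // tells the controller to delete old ReplicaSets immediately after a rollout.
    func createCleanupDeployment(c kubernetes.Interface, ns string) (*appsv1.Deployment, error) {
        replicas := int32(1)
        historyLimit := int32(0)
        labels := map[string]string{"name": "cleanup-pod"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas:             &replicas,
                RevisionHistoryLimit: &historyLimit,
                Selector:             &metav1.LabelSelector{MatchLabels: labels},
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "redis",
                            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                        }},
                    },
                },
            },
        }
        return c.AppsV1().Deployments(ns).Create(d)
    }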
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:55:00.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 11:55:02.243: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.774635ms)
Jan 25 11:55:02.248: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.591043ms)
Jan 25 11:55:02.254: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.229174ms)
Jan 25 11:55:02.260: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.541064ms)
Jan 25 11:55:02.266: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.617866ms)
Jan 25 11:55:02.271: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.684895ms)
Jan 25 11:55:02.278: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.151328ms)
Jan 25 11:55:02.284: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.18286ms)
Jan 25 11:55:02.289: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.018219ms)
Jan 25 11:55:02.295: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.208489ms)
Jan 25 11:55:02.301: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.442624ms)
Jan 25 11:55:02.307: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.011117ms)
Jan 25 11:55:02.416: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 109.342811ms)
Jan 25 11:55:02.427: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.056103ms)
Jan 25 11:55:02.438: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.567719ms)
Jan 25 11:55:02.444: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.407426ms)
Jan 25 11:55:02.451: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.1871ms)
Jan 25 11:55:02.466: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.640486ms)
Jan 25 11:55:02.486: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.048065ms)
Jan 25 11:55:02.514: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 28.031269ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:55:02.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-6pxwp" for this suite.
Jan 25 11:55:08.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:55:08.724: INFO: namespace: e2e-tests-proxy-6pxwp, resource: bindings, ignored listing per whitelist
Jan 25 11:55:08.853: INFO: namespace e2e-tests-proxy-6pxwp deletion completed in 6.316524715s

• [SLOW TEST:7.927 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
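Each numbered request above is a GET against the node proxy subresource, /api/v1/nodes/<node>/proxy/logs/, which returns the kubelet's log directory listing (hence the truncated alternatives.log entries); the spec repeats it 20 times and records the latency. A sketch of issuing the same request through the CoreV1 REST client, using the pre-1.17, context-free Do()/Raw() calls:

    package e2esketch

    import (
        "k8s.io/client-go/kubernetes"
    )

    // nodeProxyLogs fetches the listing served at /api/v1/nodes/<node>/proxy/logs/
    // (the kubelet's log directory) through the API server's node proxy.
    func nodeProxyLogs(c kubernetes.Interface, node string) ([]byte, error) {
        return c.CoreV1().RESTClient().Get().
            AbsPath("/api/v1/nodes/" + node + "/proxy/logs/").
            Do().
            Raw()
    }

Called, for example, as nodeProxyLogs(clientset, "hunter-server-hu5at5svl7ps").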
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:55:08.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-88f12165-3f69-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 25 11:55:09.207: INFO: Waiting up to 5m0s for pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006" in namespace "e2e-tests-configmap-5kn4w" to be "success or failure"
Jan 25 11:55:09.219: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.832339ms
Jan 25 11:55:11.232: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024681108s
Jan 25 11:55:13.246: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038957797s
Jan 25 11:55:16.489: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.282599326s
Jan 25 11:55:18.519: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.312612876s
Jan 25 11:55:20.557: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.350512201s
STEP: Saw pod success
Jan 25 11:55:20.558: INFO: Pod "pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:55:20.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006 container configmap-volume-test: 
STEP: delete the pod
Jan 25 11:55:20.786: INFO: Waiting for pod pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:55:20.828: INFO: Pod pod-configmaps-890fcbf0-3f69-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:55:20.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5kn4w" for this suite.
Jan 25 11:55:28.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:55:29.053: INFO: namespace: e2e-tests-configmap-5kn4w, resource: bindings, ignored listing per whitelist
Jan 25 11:55:29.091: INFO: namespace e2e-tests-configmap-5kn4w deletion completed in 8.250258915s

• [SLOW TEST:20.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
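"With mappings" means the ConfigMap is not projected wholesale: an items entry maps a specific key to a chosen relative path inside the volume, and only that file appears. A sketch of the ConfigMap plus consuming pod (key, value, paths and image are illustrative; the container name mirrors the log above; pre-1.17 signatures):

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createConfigMapMappedVolumePod mounts a single ConfigMap key at a mapped path.
    func createConfigMapMappedVolumePod(c kubernetes.Interface, ns string) (*v1.Pod, error) {
        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := c.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            return nil, err
        }
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:         "configmap-volume-test",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-1"},
                    VolumeMounts: []v1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: v1.VolumeSource{
                        ConfigMap: &v1.ConfigMapVolumeSource{
                            LocalObjectReference: v1.LocalObjectReference{Name: cm.Name},
                            // The mapping: key "data-1" appears as path/to/data-1 in the volume.
                            Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
                        },
                    },
                }},
            },
        }
        return c.CoreV1().Pods(ns).Create(pod)
    }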
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:55:29.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-9502f8f6-3f69-11ea-8a8b-0242ac110006
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-9502f8f6-3f69-11ea-8a8b-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:55:41.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w2n9t" for this suite.
Jan 25 11:56:05.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:56:05.629: INFO: namespace: e2e-tests-configmap-w2n9t, resource: bindings, ignored listing per whitelist
Jan 25 11:56:05.703: INFO: namespace e2e-tests-configmap-w2n9t deletion completed in 24.176770828s

• [SLOW TEST:36.612 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
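The update spec mounts a ConfigMap as a volume and then changes its data in place; the kubelet refreshes the mounted file on its next sync rather than instantly, which is why the spec spends its time on "waiting to observe update in volume". A sketch of the update step (assumed helper, illustrative key/value, pre-1.17 signatures):

    package e2esketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateMountedConfigMap changes a key in a ConfigMap that is already mounted
    // as a volume; a container watching the file sees the new value only after
    // the kubelet's next volume sync.
    func updateMountedConfigMap(c kubernetes.Interface, ns, name string) error {
        cm, err := c.CoreV1().ConfigMaps(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["data-1"] = "value-2" // illustrative key/value
        _, err = c.CoreV1().ConfigMaps(ns).Update(cm)
        return err
    }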
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:56:05.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 25 11:56:05.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:06.384: INFO: stderr: ""
Jan 25 11:56:06.384: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 11:56:06.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:06.514: INFO: stderr: ""
Jan 25 11:56:06.515: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 25 11:56:11.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:14.755: INFO: stderr: ""
Jan 25 11:56:14.756: INFO: stdout: "update-demo-nautilus-mmh4s update-demo-nautilus-shx56 "
Jan 25 11:56:14.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmh4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:15.192: INFO: stderr: ""
Jan 25 11:56:15.192: INFO: stdout: ""
Jan 25 11:56:15.192: INFO: update-demo-nautilus-mmh4s is created but not running
Jan 25 11:56:20.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:20.410: INFO: stderr: ""
Jan 25 11:56:20.410: INFO: stdout: "update-demo-nautilus-mmh4s update-demo-nautilus-shx56 "
Jan 25 11:56:20.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmh4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:20.625: INFO: stderr: ""
Jan 25 11:56:20.626: INFO: stdout: "true"
Jan 25 11:56:20.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmh4s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:20.756: INFO: stderr: ""
Jan 25 11:56:20.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:56:20.757: INFO: validating pod update-demo-nautilus-mmh4s
Jan 25 11:56:20.769: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:56:20.769: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:56:20.769: INFO: update-demo-nautilus-mmh4s is verified up and running
Jan 25 11:56:20.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shx56 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:20.895: INFO: stderr: ""
Jan 25 11:56:20.895: INFO: stdout: "true"
Jan 25 11:56:20.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-shx56 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:21.016: INFO: stderr: ""
Jan 25 11:56:21.016: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:56:21.016: INFO: validating pod update-demo-nautilus-shx56
Jan 25 11:56:21.024: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:56:21.024: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:56:21.024: INFO: update-demo-nautilus-shx56 is verified up and running
STEP: rolling-update to new replication controller
Jan 25 11:56:21.027: INFO: scanned /root for discovery docs: 
Jan 25 11:56:21.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:56.431: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 11:56:56.432: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 11:56:56.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:56.692: INFO: stderr: ""
Jan 25 11:56:56.693: INFO: stdout: "update-demo-kitten-72xqn update-demo-kitten-cxbs4 "
Jan 25 11:56:56.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-72xqn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:56.842: INFO: stderr: ""
Jan 25 11:56:56.842: INFO: stdout: "true"
Jan 25 11:56:56.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-72xqn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:56.970: INFO: stderr: ""
Jan 25 11:56:56.970: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 11:56:56.970: INFO: validating pod update-demo-kitten-72xqn
Jan 25 11:56:56.992: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 11:56:56.992: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 25 11:56:56.993: INFO: update-demo-kitten-72xqn is verified up and running
Jan 25 11:56:56.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cxbs4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:57.133: INFO: stderr: ""
Jan 25 11:56:57.134: INFO: stdout: "true"
Jan 25 11:56:57.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cxbs4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4pr7'
Jan 25 11:56:57.231: INFO: stderr: ""
Jan 25 11:56:57.231: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 11:56:57.232: INFO: validating pod update-demo-kitten-cxbs4
Jan 25 11:56:57.244: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 11:56:57.244: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 25 11:56:57.244: INFO: update-demo-kitten-cxbs4 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:56:57.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h4pr7" for this suite.
Jan 25 11:57:23.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:57:23.339: INFO: namespace: e2e-tests-kubectl-h4pr7, resource: bindings, ignored listing per whitelist
Jan 25 11:57:23.573: INFO: namespace e2e-tests-kubectl-h4pr7 deletion completed in 26.323938446s

• [SLOW TEST:77.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
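kubectl rolling-update (deprecated, as the stderr above notes) swaps one replication controller for another pod by pod. The suite first pipes an initial controller like the following to kubectl create -f -, then rolls it over to the kitten image. This sketch reuses the controller name, the name=update-demo label, the replica count and the nautilus image seen in the log; the extra version label is an assumption, added so a replacement controller can use a distinct selector:

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createUpdateDemoRC creates the initial update-demo replication controller
    // (2 nautilus replicas) that a rolling update later replaces with kitten pods.
    func createUpdateDemoRC(c kubernetes.Interface, ns string) (*v1.ReplicationController, error) {
        replicas := int32(2)
        labels := map[string]string{"name": "update-demo", "version": "nautilus"}
        rc := &v1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
            Spec: v1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "update-demo",
                            Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
                        }},
                    },
                },
            },
        }
        return c.CoreV1().ReplicationControllers(ns).Create(rc)
    }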
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:57:23.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-d95ae2d9-3f69-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 11:57:24.013: INFO: Waiting up to 5m0s for pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-tzqsc" to be "success or failure"
Jan 25 11:57:24.028: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.85965ms
Jan 25 11:57:26.043: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029825169s
Jan 25 11:57:28.068: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055145867s
Jan 25 11:57:30.281: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267786169s
Jan 25 11:57:32.294: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280451425s
Jan 25 11:57:34.335: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321341058s
STEP: Saw pod success
Jan 25 11:57:34.335: INFO: Pod "pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 11:57:34.374: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan 25 11:57:34.644: INFO: Waiting for pod pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006 to disappear
Jan 25 11:57:34.683: INFO: Pod pod-secrets-d95ec0e3-3f69-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:57:34.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tzqsc" for this suite.
Jan 25 11:57:42.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:57:43.072: INFO: namespace: e2e-tests-secrets-tzqsc, resource: bindings, ignored listing per whitelist
Jan 25 11:57:43.104: INFO: namespace e2e-tests-secrets-tzqsc deletion completed in 8.384880752s

• [SLOW TEST:19.531 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
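"Mappings and Item Mode set" means each projected secret key gets both an explicit path and an explicit per-file mode via items[].mode. A sketch with the same caveats as before (secret and container names follow the log's pattern; key, path and the 0400 mode are illustrative; pre-1.17 signatures):

    package e2esketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createSecretMappedModePod mounts one secret key at a mapped path with an
    // explicit per-item file mode.
    func createSecretMappedModePod(c kubernetes.Interface, ns string) (*v1.Pod, error) {
        secret := &v1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
            Data:       map[string][]byte{"data-1": []byte("value-1")},
        }
        if _, err := c.CoreV1().Secrets(ns).Create(secret); err != nil {
            return nil, err
        }
        itemMode := int32(0400) // mode applied to the single projected file
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:         "secret-volume-test",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "ls -l /etc/secret-volume/new-path-data-1"},
                    VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
                }},
                Volumes: []v1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: v1.VolumeSource{
                        Secret: &v1.SecretVolumeSource{
                            SecretName: secret.Name,
                            Items:      []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &itemMode}},
                        },
                    },
                }},
            },
        }
        return c.CoreV1().Pods(ns).Create(pod)
    }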
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:57:43.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 25 11:57:43.278: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hl2nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-hl2nd/configmaps/e2e-watch-test-watch-closed,UID:e4debc1f-3f69-11ea-a994-fa163e34d433,ResourceVersion:19408394,Generation:0,CreationTimestamp:2020-01-25 11:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 11:57:43.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hl2nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-hl2nd/configmaps/e2e-watch-test-watch-closed,UID:e4debc1f-3f69-11ea-a994-fa163e34d433,ResourceVersion:19408395,Generation:0,CreationTimestamp:2020-01-25 11:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 25 11:57:43.300: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hl2nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-hl2nd/configmaps/e2e-watch-test-watch-closed,UID:e4debc1f-3f69-11ea-a994-fa163e34d433,ResourceVersion:19408396,Generation:0,CreationTimestamp:2020-01-25 11:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 11:57:43.300: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hl2nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-hl2nd/configmaps/e2e-watch-test-watch-closed,UID:e4debc1f-3f69-11ea-a994-fa163e34d433,ResourceVersion:19408397,Generation:0,CreationTimestamp:2020-01-25 11:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:57:43.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hl2nd" for this suite.
Jan 25 11:57:49.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:57:49.619: INFO: namespace: e2e-tests-watch-hl2nd, resource: bindings, ignored listing per whitelist
Jan 25 11:57:49.723: INFO: namespace e2e-tests-watch-hl2nd deletion completed in 6.405065115s

• [SLOW TEST:6.619 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
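The point of this spec is that a watch opened at the resourceVersion recorded by the previous, closed watch replays the changes made while it was closed, which is exactly the MODIFIED (mutation: 2) followed by DELETED pair logged above. A sketch using the pre-1.17, context-free Watch call; the label selector is the one from the log, the helper names are illustrative:

    package e2esketch

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // restartConfigMapWatch opens a new watch starting at the resourceVersion
    // observed by a previous (now closed) watch, so intermediate changes replay.
    func restartConfigMapWatch(c kubernetes.Interface, ns, lastResourceVersion string) (watch.Interface, error) {
        return c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
            LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
            ResourceVersion: lastResourceVersion,
        })
    }

    // drainEvents prints the replayed events (expected here: MODIFIED, then DELETED).
    func drainEvents(w watch.Interface, n int) {
        defer w.Stop()
        for i := 0; i < n; i++ {
            ev := <-w.ResultChan()
            fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
        }
    }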
SSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:57:49.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 11:57:49.932: INFO: Creating ReplicaSet my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006
Jan 25 11:57:50.067: INFO: Pod name my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006: Found 0 pods out of 1
Jan 25 11:57:55.091: INFO: Pod name my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006: Found 1 pods out of 1
Jan 25 11:57:55.091: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006" is running
Jan 25 11:58:01.121: INFO: Pod "my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006-72fwz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:57:50 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:57:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:57:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 11:57:50 +0000 UTC Reason: Message:}])
Jan 25 11:58:01.121: INFO: Trying to dial the pod
Jan 25 11:58:06.162: INFO: Controller my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006: Got expected result from replica 1 [my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006-72fwz]: "my-hostname-basic-e8dfd59a-3f69-11ea-8a8b-0242ac110006-72fwz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:58:06.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-sgwbk" for this suite.
Jan 25 11:58:14.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:58:14.855: INFO: namespace: e2e-tests-replicaset-sgwbk, resource: bindings, ignored listing per whitelist
Jan 25 11:58:14.887: INFO: namespace e2e-tests-replicaset-sgwbk deletion completed in 8.718496864s

• [SLOW TEST:25.165 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:58:14.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 25 11:58:26.229: INFO: Pod pod-hostip-f86a9f53-3f69-11ea-8a8b-0242ac110006 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:58:26.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-sjqwh" for this suite.
Jan 25 11:58:50.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:58:50.358: INFO: namespace: e2e-tests-pods-sjqwh, resource: bindings, ignored listing per whitelist
Jan 25 11:58:50.588: INFO: namespace e2e-tests-pods-sjqwh deletion completed in 24.348350755s

• [SLOW TEST:35.700 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:58:50.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 11:59:00.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hvlwd" for this suite.
Jan 25 11:59:54.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 11:59:55.015: INFO: namespace: e2e-tests-kubelet-test-hvlwd, resource: bindings, ignored listing per whitelist
Jan 25 11:59:55.069: INFO: namespace e2e-tests-kubelet-test-hvlwd deletion completed in 54.186471471s

• [SLOW TEST:64.481 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
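The test above runs a busybox container with a read-only root filesystem and expects writes to / to fail. A minimal sketch of the same idea; the pod name, command, and marker output are illustrative, not the spec the framework generated:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    # a write to the root filesystem should fail with "Read-only file system"
    command: ["/bin/sh", "-c", "echo test > /file || echo write-blocked"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs busybox-readonly-fs   # once the pod completes, expect "write-blocked"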
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 11:59:55.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 25 11:59:55.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-zbwkq run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 25 12:00:06.937: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0125 12:00:05.005935    2926 log.go:172] (0xc000630370) (0xc0007be500) Create stream\nI0125 12:00:05.006108    2926 log.go:172] (0xc000630370) (0xc0007be500) Stream added, broadcasting: 1\nI0125 12:00:05.058680    2926 log.go:172] (0xc000630370) Reply frame received for 1\nI0125 12:00:05.058795    2926 log.go:172] (0xc000630370) (0xc0005e5400) Create stream\nI0125 12:00:05.058811    2926 log.go:172] (0xc000630370) (0xc0005e5400) Stream added, broadcasting: 3\nI0125 12:00:05.063257    2926 log.go:172] (0xc000630370) Reply frame received for 3\nI0125 12:00:05.063310    2926 log.go:172] (0xc000630370) (0xc00097e000) Create stream\nI0125 12:00:05.063324    2926 log.go:172] (0xc000630370) (0xc00097e000) Stream added, broadcasting: 5\nI0125 12:00:05.071549    2926 log.go:172] (0xc000630370) Reply frame received for 5\nI0125 12:00:05.071656    2926 log.go:172] (0xc000630370) (0xc0005e54a0) Create stream\nI0125 12:00:05.071668    2926 log.go:172] (0xc000630370) (0xc0005e54a0) Stream added, broadcasting: 7\nI0125 12:00:05.093976    2926 log.go:172] (0xc000630370) Reply frame received for 7\nI0125 12:00:05.094302    2926 log.go:172] (0xc0005e5400) (3) Writing data frame\nI0125 12:00:05.094626    2926 log.go:172] (0xc0005e5400) (3) Writing data frame\nI0125 12:00:05.139062    2926 log.go:172] (0xc000630370) Data frame received for 5\nI0125 12:00:05.139113    2926 log.go:172] (0xc00097e000) (5) Data frame handling\nI0125 12:00:05.139147    2926 log.go:172] (0xc00097e000) (5) Data frame sent\nI0125 12:00:05.205961    2926 log.go:172] (0xc000630370) Data frame received for 5\nI0125 12:00:05.206033    2926 log.go:172] (0xc00097e000) (5) Data frame handling\nI0125 12:00:05.206101    2926 log.go:172] (0xc00097e000) (5) Data frame sent\nI0125 12:00:06.859000    2926 log.go:172] (0xc000630370) Data frame received for 1\nI0125 12:00:06.859445    2926 log.go:172] (0xc000630370) (0xc0005e54a0) Stream removed, broadcasting: 7\nI0125 12:00:06.859610    2926 log.go:172] (0xc0007be500) (1) Data frame handling\nI0125 12:00:06.859651    2926 log.go:172] (0xc0007be500) (1) Data frame sent\nI0125 12:00:06.859772    2926 log.go:172] (0xc000630370) (0xc0005e5400) Stream removed, broadcasting: 3\nI0125 12:00:06.859836    2926 log.go:172] (0xc000630370) (0xc0007be500) Stream removed, broadcasting: 1\nI0125 12:00:06.859935    2926 log.go:172] (0xc000630370) (0xc00097e000) Stream removed, broadcasting: 5\nI0125 12:00:06.859987    2926 log.go:172] (0xc000630370) Go away received\nI0125 12:00:06.860141    2926 log.go:172] (0xc000630370) (0xc0007be500) Stream removed, broadcasting: 1\nI0125 12:00:06.860226    2926 log.go:172] (0xc000630370) (0xc0005e5400) Stream removed, broadcasting: 3\nI0125 12:00:06.860254    2926 log.go:172] (0xc000630370) (0xc00097e000) Stream removed, broadcasting: 5\nI0125 12:00:06.860303    2926 log.go:172] (0xc000630370) (0xc0005e54a0) Stream removed, broadcasting: 7\n"
Jan 25 12:00:06.938: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:00:08.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zbwkq" for this suite.
Jan 25 12:00:15.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:00:15.226: INFO: namespace: e2e-tests-kubectl-zbwkq, resource: bindings, ignored listing per whitelist
Jan 25 12:00:15.290: INFO: namespace e2e-tests-kubectl-zbwkq deletion completed in 6.313145353s

• [SLOW TEST:20.220 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
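The command the framework ran is quoted in full above and can be reproduced by hand, although --generator=job/v1 is deprecated (and removed from newer kubectl), as the stderr itself warns. The job names below are illustrative:

# era-appropriate form, mirroring the logged command:
kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true \
  --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'

# rough equivalent on newer kubectl, where run no longer creates Jobs:
kubectl create job rm-busybox-job --image=docker.io/library/busybox:1.29 -- sh -c 'echo "stdin closed"'
kubectl wait --for=condition=complete job/rm-busybox-job --timeout=120s
kubectl delete job rm-busybox-job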
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:00:15.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 25 12:00:16.465: INFO: Pod name wrapped-volume-race-402e7359-3f6a-11ea-8a8b-0242ac110006: Found 0 pods out of 5
Jan 25 12:00:21.496: INFO: Pod name wrapped-volume-race-402e7359-3f6a-11ea-8a8b-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-402e7359-3f6a-11ea-8a8b-0242ac110006 in namespace e2e-tests-emptydir-wrapper-4s5tj, will wait for the garbage collector to delete the pods
Jan 25 12:02:23.651: INFO: Deleting ReplicationController wrapped-volume-race-402e7359-3f6a-11ea-8a8b-0242ac110006 took: 25.21657ms
Jan 25 12:02:24.452: INFO: Terminating ReplicationController wrapped-volume-race-402e7359-3f6a-11ea-8a8b-0242ac110006 pods took: 801.049828ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 12:03:12.964: INFO: Pod name wrapped-volume-race-a94a5f58-3f6a-11ea-8a8b-0242ac110006: Found 0 pods out of 5
Jan 25 12:03:17.994: INFO: Pod name wrapped-volume-race-a94a5f58-3f6a-11ea-8a8b-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a94a5f58-3f6a-11ea-8a8b-0242ac110006 in namespace e2e-tests-emptydir-wrapper-4s5tj, will wait for the garbage collector to delete the pods
Jan 25 12:05:12.178: INFO: Deleting ReplicationController wrapped-volume-race-a94a5f58-3f6a-11ea-8a8b-0242ac110006 took: 30.392966ms
Jan 25 12:05:12.979: INFO: Terminating ReplicationController wrapped-volume-race-a94a5f58-3f6a-11ea-8a8b-0242ac110006 pods took: 800.608133ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 12:06:03.488: INFO: Pod name wrapped-volume-race-0efdfa4d-3f6b-11ea-8a8b-0242ac110006: Found 0 pods out of 5
Jan 25 12:06:08.607: INFO: Pod name wrapped-volume-race-0efdfa4d-3f6b-11ea-8a8b-0242ac110006: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0efdfa4d-3f6b-11ea-8a8b-0242ac110006 in namespace e2e-tests-emptydir-wrapper-4s5tj, will wait for the garbage collector to delete the pods
Jan 25 12:08:03.001: INFO: Deleting ReplicationController wrapped-volume-race-0efdfa4d-3f6b-11ea-8a8b-0242ac110006 took: 45.466398ms
Jan 25 12:08:03.403: INFO: Terminating ReplicationController wrapped-volume-race-0efdfa4d-3f6b-11ea-8a8b-0242ac110006 pods took: 401.460888ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:08:54.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4s5tj" for this suite.
Jan 25 12:09:05.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:09:05.158: INFO: namespace: e2e-tests-emptydir-wrapper-4s5tj, resource: bindings, ignored listing per whitelist
Jan 25 12:09:05.166: INFO: namespace e2e-tests-emptydir-wrapper-4s5tj deletion completed in 10.205634667s

• [SLOW TEST:529.875 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
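The race test above repeatedly creates and garbage-collects a 5-replica ReplicationController whose pods mount configMap volumes. A scaled-down sketch of the pattern it stresses, using 2 configmaps instead of 50 and a single pod; all names are placeholders:

for i in 1 2; do
  kubectl create configmap race-cm-$i --from-literal=data=value-$i
done
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-sketch
spec:
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cm-1
      mountPath: /etc/cm-1
    - name: cm-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-1
    configMap:
      name: race-cm-1
  - name: cm-2
    configMap:
      name: race-cm-2
EOF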
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:09:05.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-7b72b331-3f6b-11ea-8a8b-0242ac110006
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:09:21.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kccqd" for this suite.
Jan 25 12:09:59.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:09:59.616: INFO: namespace: e2e-tests-configmap-kccqd, resource: bindings, ignored listing per whitelist
Jan 25 12:09:59.881: INFO: namespace e2e-tests-configmap-kccqd deletion completed in 38.395090853s

• [SLOW TEST:54.715 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
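The binary-data variant above checks that non-UTF-8 content survives the round trip through a configMap volume. A sketch, assuming kubectl stores non-text --from-file content under binaryData; the configmap name and file path are placeholders:

head -c 16 /dev/urandom > /tmp/binary.bin
kubectl create configmap binary-data-cm --from-file=data=/tmp/binary.bin
# non-UTF-8 content should appear under .binaryData rather than .data
kubectl get configmap binary-data-cm -o jsonpath='{.binaryData}{"\n"}'
# a pod mounting binary-data-cm as a configMap volume then sees the same bytes at <mountPath>/data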
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:09:59.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-9c10974d-3f6b-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 12:10:00.091: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-rjh2d" to be "success or failure"
Jan 25 12:10:00.104: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.510826ms
Jan 25 12:10:02.182: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090828257s
Jan 25 12:10:04.193: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101978872s
Jan 25 12:10:06.791: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.699773921s
Jan 25 12:10:08.839: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.747694392s
Jan 25 12:10:11.342: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.250409864s
STEP: Saw pod success
Jan 25 12:10:11.342: INFO: Pod "pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:10:11.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 12:10:12.043: INFO: Waiting for pod pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:10:12.120: INFO: Pod pod-projected-secrets-9c1136fe-3f6b-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:10:12.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rjh2d" for this suite.
Jan 25 12:10:18.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:10:18.383: INFO: namespace: e2e-tests-projected-rjh2d, resource: bindings, ignored listing per whitelist
Jan 25 12:10:18.425: INFO: namespace e2e-tests-projected-rjh2d deletion completed in 6.262263355s

• [SLOW TEST:18.544 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
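The defaultMode check above mounts a secret through a projected volume and verifies the resulting file permissions. A minimal sketch; the secret contents, names, and 0400 mode mirror the intent of the test rather than its exact spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-sketch
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-sketch
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    # expect mode 400 on the projected file, then print its content
    command: ["/bin/sh", "-c", "stat -L -c '%a' /etc/projected-secret/data-1 && cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 256   # 0400 in octal: owner read-only
      sources:
      - secret:
          name: projected-secret-sketch
EOF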
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:10:18.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 12:13:23.263: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:23.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:25.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:25.375: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:27.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:27.379: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:29.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:29.371: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:31.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:31.371: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:33.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:33.373: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:35.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:35.381: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:37.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:37.367: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:39.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:39.383: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:41.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:41.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:43.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:43.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:45.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:45.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:47.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:47.367: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:49.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:49.406: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:51.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:51.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:53.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:53.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:55.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:55.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:57.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:57.373: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:13:59.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:13:59.438: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:01.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:01.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:03.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:03.364: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:05.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:05.363: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:07.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:07.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:09.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:09.381: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:11.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:11.384: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:13.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:13.387: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:15.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:15.366: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:17.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:17.366: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:19.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:19.371: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:21.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:21.361: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:23.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:23.369: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:25.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:25.372: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:27.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:27.372: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:29.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:29.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:31.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:31.367: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:33.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:33.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:35.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:35.369: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:37.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:37.368: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:39.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:39.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:41.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:41.370: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:43.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:43.372: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:45.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:45.371: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:47.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:47.369: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:49.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:49.361: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:51.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:51.371: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:53.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:53.363: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:55.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:55.369: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:57.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:57.367: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:14:59.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:14:59.365: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:15:01.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:15:01.363: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 12:15:03.349: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 12:15:03.369: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:15:03.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4hscc" for this suite.
Jan 25 12:15:27.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:15:27.633: INFO: namespace: e2e-tests-container-lifecycle-hook-4hscc, resource: bindings, ignored listing per whitelist
Jan 25 12:15:27.711: INFO: namespace e2e-tests-container-lifecycle-hook-4hscc deletion completed in 24.331824632s

• [SLOW TEST:309.287 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
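The postStart check above gates the pod on the hook and then verifies it ran (the run also created a separate pod to handle hook requests, per the BeforeEach step). A reduced sketch that only shows the hook field itself; the pod name and marker file are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook-sketch
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container immediately after it starts
          command: ["/bin/sh", "-c", "echo poststart-ran > /tmp/poststart"]
EOF
kubectl wait --for=condition=Ready pod/pod-with-poststart-exec-hook-sketch --timeout=120s
kubectl exec pod-with-poststart-exec-hook-sketch -- cat /tmp/poststart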
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:15:27.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5f7db58b-3f6c-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 25 12:15:28.040: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-7chgl" to be "success or failure"
Jan 25 12:15:28.052: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.931627ms
Jan 25 12:15:30.356: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315430231s
Jan 25 12:15:32.377: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336605981s
Jan 25 12:15:34.402: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361076949s
Jan 25 12:15:36.420: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379106363s
Jan 25 12:15:38.435: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.394600068s
STEP: Saw pod success
Jan 25 12:15:38.435: INFO: Pod "pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:15:38.440: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 12:15:39.261: INFO: Waiting for pod pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:15:39.535: INFO: Pod pod-projected-configmaps-5f7e5c49-3f6c-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:15:39.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7chgl" for this suite.
Jan 25 12:15:45.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:15:45.980: INFO: namespace: e2e-tests-projected-7chgl, resource: bindings, ignored listing per whitelist
Jan 25 12:15:46.102: INFO: namespace e2e-tests-projected-7chgl deletion completed in 6.552924614s

• [SLOW TEST:18.391 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
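The test above mounts the same projected configMap at two paths in one pod and expects identical content at both. A minimal sketch with placeholder names:

kubectl create configmap shared-cm --from-literal=key=value
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-sketch
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    # both mount points should show the same configmap content
    command: ["/bin/sh", "-c", "cat /etc/projected-1/key /etc/projected-2/key"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/projected-1
    - name: cm-volume-2
      mountPath: /etc/projected-2
  volumes:
  - name: cm-volume-1
    projected:
      sources:
      - configMap:
          name: shared-cm
  - name: cm-volume-2
    projected:
      sources:
      - configMap:
          name: shared-cm
EOF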
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:15:46.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 12:15:56.678: INFO: Waiting up to 5m0s for pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006" in namespace "e2e-tests-pods-6qb2m" to be "success or failure"
Jan 25 12:15:56.824: INFO: Pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 145.860341ms
Jan 25 12:15:58.843: INFO: Pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164536811s
Jan 25 12:16:00.858: INFO: Pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17952962s
Jan 25 12:16:02.876: INFO: Pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197784265s
Jan 25 12:16:04.897: INFO: Pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.218713349s
STEP: Saw pod success
Jan 25 12:16:04.897: INFO: Pod "client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:16:04.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006 container env3cont: 
STEP: delete the pod
Jan 25 12:16:05.155: INFO: Waiting for pod client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:16:05.166: INFO: Pod client-envvars-7099977a-3f6c-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:16:05.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6qb2m" for this suite.
Jan 25 12:16:59.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:16:59.402: INFO: namespace: e2e-tests-pods-6qb2m, resource: bindings, ignored listing per whitelist
Jan 25 12:16:59.405: INFO: namespace e2e-tests-pods-6qb2m deletion completed in 54.228134804s

• [SLOW TEST:73.303 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
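The env3cont container above dumps its environment and the test greps for the variables the kubelet injects for pre-existing services. A by-hand sketch; the deployment, service, and pod names are placeholders:

kubectl create deployment envvar-server --image=docker.io/library/nginx:1.14-alpine
kubectl expose deployment envvar-server --port=80
# variables are only injected for services that already exist when the pod is created
kubectl run envvar-client --image=busybox:1.29 --restart=Never -- sh -c 'env | grep ENVVAR_SERVER'
# once envvar-client completes, its log should contain
# ENVVAR_SERVER_SERVICE_HOST and ENVVAR_SERVER_SERVICE_PORT
kubectl logs envvar-client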
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:16:59.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-z2qdt
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 12:16:59.602: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 12:17:33.977: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-z2qdt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 12:17:33.978: INFO: >>> kubeConfig: /root/.kube/config
I0125 12:17:34.161252       8 log.go:172] (0xc001120370) (0xc000337040) Create stream
I0125 12:17:34.161434       8 log.go:172] (0xc001120370) (0xc000337040) Stream added, broadcasting: 1
I0125 12:17:34.169743       8 log.go:172] (0xc001120370) Reply frame received for 1
I0125 12:17:34.169801       8 log.go:172] (0xc001120370) (0xc000337400) Create stream
I0125 12:17:34.169815       8 log.go:172] (0xc001120370) (0xc000337400) Stream added, broadcasting: 3
I0125 12:17:34.170924       8 log.go:172] (0xc001120370) Reply frame received for 3
I0125 12:17:34.170960       8 log.go:172] (0xc001120370) (0xc00073c0a0) Create stream
I0125 12:17:34.170973       8 log.go:172] (0xc001120370) (0xc00073c0a0) Stream added, broadcasting: 5
I0125 12:17:34.172082       8 log.go:172] (0xc001120370) Reply frame received for 5
I0125 12:17:34.472839       8 log.go:172] (0xc001120370) Data frame received for 3
I0125 12:17:34.473004       8 log.go:172] (0xc000337400) (3) Data frame handling
I0125 12:17:34.473058       8 log.go:172] (0xc000337400) (3) Data frame sent
I0125 12:17:34.832274       8 log.go:172] (0xc001120370) Data frame received for 1
I0125 12:17:34.832454       8 log.go:172] (0xc001120370) (0xc000337400) Stream removed, broadcasting: 3
I0125 12:17:34.832567       8 log.go:172] (0xc000337040) (1) Data frame handling
I0125 12:17:34.832600       8 log.go:172] (0xc000337040) (1) Data frame sent
I0125 12:17:34.832616       8 log.go:172] (0xc001120370) (0xc000337040) Stream removed, broadcasting: 1
I0125 12:17:34.833248       8 log.go:172] (0xc001120370) (0xc00073c0a0) Stream removed, broadcasting: 5
I0125 12:17:34.833372       8 log.go:172] (0xc001120370) (0xc000337040) Stream removed, broadcasting: 1
I0125 12:17:34.833392       8 log.go:172] (0xc001120370) (0xc000337400) Stream removed, broadcasting: 3
I0125 12:17:34.833406       8 log.go:172] (0xc001120370) (0xc00073c0a0) Stream removed, broadcasting: 5
I0125 12:17:34.833718       8 log.go:172] (0xc001120370) Go away received
Jan 25 12:17:34.834: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:17:34.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-z2qdt" for this suite.
Jan 25 12:17:58.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:17:59.093: INFO: namespace: e2e-tests-pod-network-test-z2qdt, resource: bindings, ignored listing per whitelist
Jan 25 12:17:59.128: INFO: namespace e2e-tests-pod-network-test-z2qdt deletion completed in 24.266700367s

• [SLOW TEST:59.722 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
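The ExecWithOptions entry above shows the exact probe: curl from the host-network hostexec container to the netserver pod's IP on port 8080. It can be re-issued by hand while the test namespace still exists; the pod names are the ones from this run, and the IP is looked up rather than hard-coded:

NS=e2e-tests-pod-network-test-z2qdt
POD_IP=$(kubectl -n "$NS" get pod netserver-0 -o jsonpath='{.status.podIP}')
kubectl -n "$NS" exec host-test-container-pod -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://$POD_IP:8080/hostName"
# expect the response to be the netserver pod's name (netserver-0)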
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:17:59.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 25 12:17:59.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-st29q'
Jan 25 12:18:01.854: INFO: stderr: ""
Jan 25 12:18:01.855: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 25 12:18:03.612: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:03.612: INFO: Found 0 / 1
Jan 25 12:18:03.952: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:03.952: INFO: Found 0 / 1
Jan 25 12:18:04.874: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:04.875: INFO: Found 0 / 1
Jan 25 12:18:05.875: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:05.876: INFO: Found 0 / 1
Jan 25 12:18:07.111: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:07.112: INFO: Found 0 / 1
Jan 25 12:18:07.930: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:07.930: INFO: Found 0 / 1
Jan 25 12:18:08.952: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:08.953: INFO: Found 0 / 1
Jan 25 12:18:09.877: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:09.878: INFO: Found 0 / 1
Jan 25 12:18:10.877: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:10.878: INFO: Found 1 / 1
Jan 25 12:18:10.878: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 12:18:10.886: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:18:10.887: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 25 12:18:10.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hrtvc redis-master --namespace=e2e-tests-kubectl-st29q'
Jan 25 12:18:11.146: INFO: stderr: ""
Jan 25 12:18:11.146: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Jan 12:18:09.436 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Jan 12:18:09.436 # Server started, Redis version 3.2.12\n1:M 25 Jan 12:18:09.437 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Jan 12:18:09.437 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 25 12:18:11.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hrtvc redis-master --namespace=e2e-tests-kubectl-st29q --tail=1'
Jan 25 12:18:11.401: INFO: stderr: ""
Jan 25 12:18:11.401: INFO: stdout: "1:M 25 Jan 12:18:09.437 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 25 12:18:11.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hrtvc redis-master --namespace=e2e-tests-kubectl-st29q --limit-bytes=1'
Jan 25 12:18:11.582: INFO: stderr: ""
Jan 25 12:18:11.582: INFO: stdout: " "
STEP: exposing timestamps
Jan 25 12:18:11.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hrtvc redis-master --namespace=e2e-tests-kubectl-st29q --tail=1 --timestamps'
Jan 25 12:18:11.705: INFO: stderr: ""
Jan 25 12:18:11.705: INFO: stdout: "2020-01-25T12:18:09.43917104Z 1:M 25 Jan 12:18:09.437 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 25 12:18:14.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hrtvc redis-master --namespace=e2e-tests-kubectl-st29q --since=1s'
Jan 25 12:18:14.342: INFO: stderr: ""
Jan 25 12:18:14.342: INFO: stdout: ""
Jan 25 12:18:14.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hrtvc redis-master --namespace=e2e-tests-kubectl-st29q --since=24h'
Jan 25 12:18:14.617: INFO: stderr: ""
Jan 25 12:18:14.618: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Jan 12:18:09.436 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Jan 12:18:09.436 # Server started, Redis version 3.2.12\n1:M 25 Jan 12:18:09.437 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Jan 12:18:09.437 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 25 12:18:14.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-st29q'
Jan 25 12:18:14.819: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 12:18:14.819: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 25 12:18:14.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-st29q'
Jan 25 12:18:15.105: INFO: stderr: "No resources found.\n"
Jan 25 12:18:15.105: INFO: stdout: ""
Jan 25 12:18:15.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-st29q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 12:18:15.336: INFO: stderr: ""
Jan 25 12:18:15.336: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:18:15.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-st29q" for this suite.
Jan 25 12:18:38.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:18:38.396: INFO: namespace: e2e-tests-kubectl-st29q, resource: bindings, ignored listing per whitelist
Jan 25 12:18:38.610: INFO: namespace e2e-tests-kubectl-st29q deletion completed in 23.220756375s

• [SLOW TEST:39.483 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
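Each filtering step above maps to one kubectl flag (the run used the older "kubectl log" alias; "kubectl logs" is the current spelling). Against any running pod, with the pod name as a placeholder:

POD=redis-master-hrtvc   # placeholder; substitute a running pod
kubectl logs $POD --tail=1                # last line only
kubectl logs $POD --limit-bytes=1         # first byte only
kubectl logs $POD --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl logs $POD --since=1s              # empty if the pod logged nothing in the last second
kubectl logs $POD --since=24h             # everything from the last 24 hours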
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:18:38.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d16a8462-3f6c-11ea-8a8b-0242ac110006
STEP: Creating configMap with name cm-test-opt-upd-d16a8543-3f6c-11ea-8a8b-0242ac110006
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d16a8462-3f6c-11ea-8a8b-0242ac110006
STEP: Updating configmap cm-test-opt-upd-d16a8543-3f6c-11ea-8a8b-0242ac110006
STEP: Creating configMap with name cm-test-opt-create-d16a8571-3f6c-11ea-8a8b-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:20:07.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vwrrl" for this suite.
Jan 25 12:20:33.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:20:33.487: INFO: namespace: e2e-tests-configmap-vwrrl, resource: bindings, ignored listing per whitelist
Jan 25 12:20:33.548: INFO: namespace e2e-tests-configmap-vwrrl deletion completed in 26.266215364s

• [SLOW TEST:114.936 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
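The optional-update test above deletes one configMap, updates another, and creates a third while a pod has them mounted, then waits for the volume contents to catch up. A reduced sketch of the optional mount that makes this possible; names are placeholders:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-sketch
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    # keep printing the mounted key; it appears or changes as the configmap is created or updated
    command: ["/bin/sh", "-c", "while true; do cat /etc/cm-volume/data 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    configMap:
      name: optional-cm      # may not exist yet; optional: true keeps the pod startable anyway
      optional: true
EOF
# creating the configmap later is reflected in the volume after the kubelet sync period
kubectl create configmap optional-cm --from-literal=data=value-1
# later updates (e.g. via kubectl edit configmap optional-cm) show up in the mounted files the same way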
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:20:33.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 12:20:33.892: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 12:20:33.975: INFO: Number of nodes with available pods: 0
Jan 25 12:20:33.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:34.998: INFO: Number of nodes with available pods: 0
Jan 25 12:20:34.998: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:36.240: INFO: Number of nodes with available pods: 0
Jan 25 12:20:36.240: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:37.001: INFO: Number of nodes with available pods: 0
Jan 25 12:20:37.001: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:38.009: INFO: Number of nodes with available pods: 0
Jan 25 12:20:38.009: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:38.987: INFO: Number of nodes with available pods: 0
Jan 25 12:20:38.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:40.563: INFO: Number of nodes with available pods: 0
Jan 25 12:20:40.564: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:41.634: INFO: Number of nodes with available pods: 0
Jan 25 12:20:41.634: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:42.003: INFO: Number of nodes with available pods: 0
Jan 25 12:20:42.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:43.000: INFO: Number of nodes with available pods: 0
Jan 25 12:20:43.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:20:44.014: INFO: Number of nodes with available pods: 1
Jan 25 12:20:44.014: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 25 12:20:44.158: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:45.215: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:46.222: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:47.462: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:48.214: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:49.218: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:50.273: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:51.214: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:51.214: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:52.209: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:52.210: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:53.221: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:53.221: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:54.249: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:54.249: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:55.218: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:55.218: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:56.220: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:56.220: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:57.230: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:57.230: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:58.220: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:58.220: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:20:59.218: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:20:59.219: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:21:00.221: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:21:00.221: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:21:01.218: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:21:01.218: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:21:02.214: INFO: Wrong image for pod: daemon-set-bhz6z. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 25 12:21:02.214: INFO: Pod daemon-set-bhz6z is not available
Jan 25 12:21:03.221: INFO: Pod daemon-set-bftvg is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 25 12:21:03.247: INFO: Number of nodes with available pods: 0
Jan 25 12:21:03.247: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:04.350: INFO: Number of nodes with available pods: 0
Jan 25 12:21:04.351: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:05.273: INFO: Number of nodes with available pods: 0
Jan 25 12:21:05.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:06.266: INFO: Number of nodes with available pods: 0
Jan 25 12:21:06.266: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:08.053: INFO: Number of nodes with available pods: 0
Jan 25 12:21:08.054: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:08.362: INFO: Number of nodes with available pods: 0
Jan 25 12:21:08.363: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:09.276: INFO: Number of nodes with available pods: 0
Jan 25 12:21:09.276: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:10.310: INFO: Number of nodes with available pods: 0
Jan 25 12:21:10.310: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:11.336: INFO: Number of nodes with available pods: 0
Jan 25 12:21:11.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 25 12:21:12.273: INFO: Number of nodes with available pods: 1
Jan 25 12:21:12.273: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bl9ts, will wait for the garbage collector to delete the pods
Jan 25 12:21:12.474: INFO: Deleting DaemonSet.extensions daemon-set took: 40.917008ms
Jan 25 12:21:12.675: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.062645ms
Jan 25 12:21:19.110: INFO: Number of nodes with available pods: 0
Jan 25 12:21:19.110: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 12:21:19.116: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bl9ts/daemonsets","resourceVersion":"19410965"},"items":null}

Jan 25 12:21:19.125: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bl9ts/pods","resourceVersion":"19410965"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:21:19.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bl9ts" for this suite.
Jan 25 12:21:25.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:21:25.255: INFO: namespace: e2e-tests-daemonsets-bl9ts, resource: bindings, ignored listing per whitelist
Jan 25 12:21:25.379: INFO: namespace e2e-tests-daemonsets-bl9ts deletion completed in 6.231706994s

• [SLOW TEST:51.831 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
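
The rollout traced above (the "Wrong image for pod" polling until the old pod is replaced, then one available pod again) is driven by a DaemonSet whose update strategy is RollingUpdate. A minimal sketch of such an object using the upstream Go API types follows; the names, labels and marshalling are illustrative, not the test framework's actual construction, with only the images taken from the log.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate makes the controller delete and recreate daemon pods
			// when the template changes, which is what the "Wrong image for pod"
			// lines above are polling for.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "app",
						// Initial image from the log; the test then switches the
						// template to gcr.io/kubernetes-e2e-test-images/redis:1.0.
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Changing .spec.template.spec.containers[0].image on an object like this is the "Update daemon pods image" step; the per-pod image checks above then wait for the controller to converge.
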
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:21:25.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 12:21:25.503: INFO: Waiting up to 5m0s for pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-7m2nv" to be "success or failure"
Jan 25 12:21:25.562: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 59.136645ms
Jan 25 12:21:27.773: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269447963s
Jan 25 12:21:29.802: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298670175s
Jan 25 12:21:31.971: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.468220185s
Jan 25 12:21:33.981: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.478153225s
Jan 25 12:21:35.996: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.49283855s
STEP: Saw pod success
Jan 25 12:21:35.996: INFO: Pod "downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:21:36.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 12:21:36.202: INFO: Waiting for pod downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:21:36.224: INFO: Pod downwardapi-volume-349aa997-3f6d-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:21:36.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7m2nv" for this suite.
Jan 25 12:21:42.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:21:42.386: INFO: namespace: e2e-tests-projected-7m2nv, resource: bindings, ignored listing per whitelist
Jan 25 12:21:42.610: INFO: namespace e2e-tests-projected-7m2nv deletion completed in 6.375026153s

• [SLOW TEST:17.231 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
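
The pod behind this spec mounts a projected downward API volume whose file points at limits.memory; because the container sets no memory limit, the value written into the file falls back to the node's allocatable memory, which is what the test reads back. A hedged sketch of such a pod follows; the image, command and paths are illustrative rather than the framework's exact choices.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// No resources.limits.memory here on purpose: the projected file
				// then reports the node's allocatable memory instead.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
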
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:21:42.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xn2hp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 12:21:42.837: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 12:22:13.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xn2hp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 12:22:13.152: INFO: >>> kubeConfig: /root/.kube/config
I0125 12:22:13.374713       8 log.go:172] (0xc001b18370) (0xc000c161e0) Create stream
I0125 12:22:13.375063       8 log.go:172] (0xc001b18370) (0xc000c161e0) Stream added, broadcasting: 1
I0125 12:22:13.448902       8 log.go:172] (0xc001b18370) Reply frame received for 1
I0125 12:22:13.449308       8 log.go:172] (0xc001b18370) (0xc001aed220) Create stream
I0125 12:22:13.449364       8 log.go:172] (0xc001b18370) (0xc001aed220) Stream added, broadcasting: 3
I0125 12:22:13.466898       8 log.go:172] (0xc001b18370) Reply frame received for 3
I0125 12:22:13.467212       8 log.go:172] (0xc001b18370) (0xc0020acd20) Create stream
I0125 12:22:13.467315       8 log.go:172] (0xc001b18370) (0xc0020acd20) Stream added, broadcasting: 5
I0125 12:22:13.474035       8 log.go:172] (0xc001b18370) Reply frame received for 5
I0125 12:22:13.814289       8 log.go:172] (0xc001b18370) Data frame received for 3
I0125 12:22:13.814381       8 log.go:172] (0xc001aed220) (3) Data frame handling
I0125 12:22:13.814411       8 log.go:172] (0xc001aed220) (3) Data frame sent
I0125 12:22:14.016820       8 log.go:172] (0xc001b18370) (0xc0020acd20) Stream removed, broadcasting: 5
I0125 12:22:14.017244       8 log.go:172] (0xc001b18370) Data frame received for 1
I0125 12:22:14.017505       8 log.go:172] (0xc001b18370) (0xc001aed220) Stream removed, broadcasting: 3
I0125 12:22:14.017683       8 log.go:172] (0xc000c161e0) (1) Data frame handling
I0125 12:22:14.017770       8 log.go:172] (0xc000c161e0) (1) Data frame sent
I0125 12:22:14.017794       8 log.go:172] (0xc001b18370) (0xc000c161e0) Stream removed, broadcasting: 1
I0125 12:22:14.017836       8 log.go:172] (0xc001b18370) Go away received
I0125 12:22:14.018703       8 log.go:172] (0xc001b18370) (0xc000c161e0) Stream removed, broadcasting: 1
I0125 12:22:14.018765       8 log.go:172] (0xc001b18370) (0xc001aed220) Stream removed, broadcasting: 3
I0125 12:22:14.018791       8 log.go:172] (0xc001b18370) (0xc0020acd20) Stream removed, broadcasting: 5
Jan 25 12:22:14.019: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:22:14.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xn2hp" for this suite.
Jan 25 12:22:38.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:22:38.178: INFO: namespace: e2e-tests-pod-network-test-xn2hp, resource: bindings, ignored listing per whitelist
Jan 25 12:22:38.282: INFO: namespace e2e-tests-pod-network-test-xn2hp deletion completed in 24.240533146s

• [SLOW TEST:55.670 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
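
The exec'd curl above asks the web server in host-test-container-pod to dial the other test pod over HTTP and report the hostname it answered with; "Waiting for endpoints: map[]" means every expected endpoint has already answered. A rough Go equivalent of that probe, runnable only from inside the cluster network, is sketched below; the pod IPs are the ones from this run, and the {"responses": [...]} response shape is an assumption about the netexec-style server, not something the log shows.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Pod IPs from the log; substitute your own test pods' IPs.
	hostTestPodIP := "10.32.0.5"
	targetPodIP := "10.32.0.4"

	// Same probe the test drives through `kubectl exec ... curl`: ask the
	// webserver in the host-test pod to dial the target pod over HTTP.
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
		hostTestPodIP, targetPodIP)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Assumed response shape: a JSON object whose "responses" field lists one
	// answer per successful try.
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("hostnames reported by target pod:", out.Responses)
}
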
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:22:38.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 25 12:22:38.482: INFO: PodSpec: initContainers in spec.initContainers
Jan 25 12:23:47.629: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-601ea494-3f6d-11ea-8a8b-0242ac110006", GenerateName:"", Namespace:"e2e-tests-init-container-bhcpf", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-bhcpf/pods/pod-init-601ea494-3f6d-11ea-8a8b-0242ac110006", UID:"6020af73-3f6d-11ea-a994-fa163e34d433", ResourceVersion:"19411264", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715551758, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"482779258"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5755m", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002884000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5755m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5755m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5755m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00289fd88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00201c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00289feb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00289fed0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00289fed8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00289fedc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715551758, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715551758, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715551758, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715551758, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002892060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025540e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002554150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c8d0245fb04bbc1dc416d3732541992f49f783f486e9c5abff9bf6d1e766cdbc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0028920a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002892080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:23:47.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bhcpf" for this suite.
Jan 25 12:24:11.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:24:11.945: INFO: namespace: e2e-tests-init-container-bhcpf, resource: bindings, ignored listing per whitelist
Jan 25 12:24:11.948: INFO: namespace e2e-tests-init-container-bhcpf deletion completed in 24.260707883s

• [SLOW TEST:93.666 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
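
The pod dumped above is simple despite the verbose status: two init containers (the first permanently failing) ahead of a paused app container under RestartPolicy Always, so init1 is restarted forever and run1 never starts. A compact reconstruction using the same images and commands that appear in the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			// RestartAlways keeps the kubelet retrying init1, hence the growing
			// RestartCount in the status above, while run1 stays Waiting.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
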
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:24:11.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-97f4b800-3f6d-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 12:24:12.286: INFO: Waiting up to 5m0s for pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-dkjch" to be "success or failure"
Jan 25 12:24:12.296: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.422382ms
Jan 25 12:24:14.339: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053275552s
Jan 25 12:24:16.348: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061650534s
Jan 25 12:24:18.624: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337697193s
Jan 25 12:24:20.638: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35203466s
Jan 25 12:24:22.682: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.39552022s
Jan 25 12:24:24.763: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.476981794s
STEP: Saw pod success
Jan 25 12:24:24.763: INFO: Pod "pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:24:24.771: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan 25 12:24:24.950: INFO: Waiting for pod pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:24:24.959: INFO: Pod pod-secrets-97f6185a-3f6d-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:24:24.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dkjch" for this suite.
Jan 25 12:24:31.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:24:31.069: INFO: namespace: e2e-tests-secrets-dkjch, resource: bindings, ignored listing per whitelist
Jan 25 12:24:31.112: INFO: namespace e2e-tests-secrets-dkjch deletion completed in 6.139727976s

• [SLOW TEST:19.164 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
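
What this spec exercises is a single Secret consumed through two separate volumes of the same pod, both of which must expose its keys. A sketch with placeholder names (the real test generates its secret name and uses its own mount-test image and content checks):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	const secretName = "secret-test-example" // placeholder for the generated name in the log

	secretVol := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The same secret is projected twice; both mounts must show its keys.
			Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
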
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:24:31.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0125 12:24:45.397223       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 12:24:45.397: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:24:45.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4lczq" for this suite.
Jan 25 12:25:07.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:25:07.776: INFO: namespace: e2e-tests-gc-4lczq, resource: bindings, ignored listing per whitelist
Jan 25 12:25:07.831: INFO: namespace e2e-tests-gc-4lczq deletion completed in 22.427587616s

• [SLOW TEST:36.718 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
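
The scenario logged here hinges on owner references: each pod of simpletest-rc-to-be-deleted gets simpletest-rc-to-stay added as a second owner, and deleting the first RC while the garbage collector waits on its dependents must not take those pods with it, because a valid owner remains. A sketch of how such a doubly-owned object is expressed; the UIDs are placeholders, and the foreground-deletion note in the trailing comment is my reading of "owner that's waiting for dependents", not something printed in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	truth := true

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod-example",
			OwnerReferences: []metav1.OwnerReference{
				{
					APIVersion:         "v1",
					Kind:               "ReplicationController",
					Name:               "simpletest-rc-to-be-deleted",
					UID:                types.UID("placeholder-uid-1"),
					BlockOwnerDeletion: &truth, // this owner's foreground deletion waits on the pod
				},
				{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "simpletest-rc-to-stay",
					UID:        types.UID("placeholder-uid-2"),
				},
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}

	// Deleting the first RC with PropagationPolicy set to
	// metav1.DeletePropagationForeground is what makes that owner "wait for
	// dependents"; the pod survives because simpletest-rc-to-stay still owns it.
	out, _ := json.MarshalIndent(pod.ObjectMeta.OwnerReferences, "", "  ")
	fmt.Println(string(out))
}
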
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:25:07.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-pxqg
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 12:25:08.064: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pxqg" in namespace "e2e-tests-subpath-75zcq" to be "success or failure"
Jan 25 12:25:08.156: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 91.586226ms
Jan 25 12:25:10.177: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113128811s
Jan 25 12:25:12.228: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163447663s
Jan 25 12:25:15.226: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 7.16150323s
Jan 25 12:25:17.398: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.333829232s
Jan 25 12:25:19.426: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 11.361903891s
Jan 25 12:25:21.446: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 13.382192547s
Jan 25 12:25:23.694: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.62971222s
Jan 25 12:25:25.711: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.64670919s
Jan 25 12:25:27.732: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 19.667962186s
Jan 25 12:25:29.750: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 21.68603761s
Jan 25 12:25:31.770: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 23.705503204s
Jan 25 12:25:33.800: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 25.735674578s
Jan 25 12:25:35.818: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 27.75421767s
Jan 25 12:25:37.850: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 29.785658826s
Jan 25 12:25:39.878: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 31.814035206s
Jan 25 12:25:41.899: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Running", Reason="", readiness=false. Elapsed: 33.834499846s
Jan 25 12:25:44.026: INFO: Pod "pod-subpath-test-downwardapi-pxqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.961899477s
STEP: Saw pod success
Jan 25 12:25:44.026: INFO: Pod "pod-subpath-test-downwardapi-pxqg" satisfied condition "success or failure"
Jan 25 12:25:44.055: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-pxqg container test-container-subpath-downwardapi-pxqg: 
STEP: delete the pod
Jan 25 12:25:44.403: INFO: Waiting for pod pod-subpath-test-downwardapi-pxqg to disappear
Jan 25 12:25:44.419: INFO: Pod pod-subpath-test-downwardapi-pxqg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pxqg
Jan 25 12:25:44.419: INFO: Deleting pod "pod-subpath-test-downwardapi-pxqg" in namespace "e2e-tests-subpath-75zcq"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:25:44.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-75zcq" for this suite.
Jan 25 12:25:50.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:25:50.713: INFO: namespace: e2e-tests-subpath-75zcq, resource: bindings, ignored listing per whitelist
Jan 25 12:25:50.834: INFO: namespace e2e-tests-subpath-75zcq deletion completed in 6.401244796s

• [SLOW TEST:43.003 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
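
The pod name encodes what is under test: a downward API volume mounted through a subPath, so the container sees only one directory of the volume rather than its root. A hedged sketch follows; the file layout and command are illustrative, and the framework's atomic-writer checks are more involved than a single cat.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward-vol",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The file lives under the "downward" directory inside the volume.
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-downwardapi",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /test-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward-vol",
					MountPath: "/test-volume",
					SubPath:   "downward", // mount just this directory of the volume
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
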
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:25:50.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:26:49.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-dh9g8" for this suite.
Jan 25 12:26:55.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:26:55.456: INFO: namespace: e2e-tests-container-runtime-dh9g8, resource: bindings, ignored listing per whitelist
Jan 25 12:26:55.561: INFO: namespace e2e-tests-container-runtime-dh9g8 deletion completed in 6.305178462s

• [SLOW TEST:64.727 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:26:55.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 12:26:55.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xp2x7'
Jan 25 12:26:56.005: INFO: stderr: ""
Jan 25 12:26:56.005: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 25 12:27:06.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xp2x7 -o json'
Jan 25 12:27:06.257: INFO: stderr: ""
Jan 25 12:27:06.258: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-25T12:26:55Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-xp2x7\",\n        \"resourceVersion\": \"19411776\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-xp2x7/pods/e2e-test-nginx-pod\",\n        \"uid\": \"f996db64-3f6d-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-vkq88\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-vkq88\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-vkq88\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T12:26:56Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T12:27:04Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T12:27:04Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-25T12:26:55Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://b283dc17ffc80453262f6641f3a087f8d1075671405dd894a0fcf426adde8a02\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-25T12:27:03Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-25T12:26:56Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 25 12:27:06.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-xp2x7'
Jan 25 12:27:06.681: INFO: stderr: ""
Jan 25 12:27:06.681: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 25 12:27:06.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xp2x7'
Jan 25 12:27:15.506: INFO: stderr: ""
Jan 25 12:27:15.506: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:27:15.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xp2x7" for this suite.
Jan 25 12:27:21.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:27:21.676: INFO: namespace: e2e-tests-kubectl-xp2x7, resource: bindings, ignored listing per whitelist
Jan 25 12:27:21.826: INFO: namespace e2e-tests-kubectl-xp2x7 deletion completed in 6.306885391s

• [SLOW TEST:26.264 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
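
`kubectl replace -f -` above PUTs the full pod object back with only the image changed; a container image is one of the few pod fields that may be mutated in place. A rough client-go equivalent is sketched below, written against a current client-go (the 1.13-era client used by this run makes the same calls without the context argument); namespace and pod name are copied from the log, everything else is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "e2e-tests-kubectl-xp2x7", "e2e-test-nginx-pod"

	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Swap the single container's image and PUT the object back, which is what
	// `kubectl replace` does under the hood.
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
	if _, err := cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("pod/%s replaced\n", name)
}
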
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:27:21.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 25 12:27:40.281: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:40.303: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:42.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:42.321: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:44.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:44.355: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:46.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:46.323: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:48.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:48.326: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:50.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:50.327: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:52.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:52.328: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:54.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:54.392: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:56.305: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:56.329: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:27:58.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:27:58.326: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:00.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:00.601: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:02.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:02.313: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:04.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:04.332: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:06.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:06.322: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:08.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:08.323: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:10.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:10.323: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:12.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:12.317: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 12:28:14.304: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 12:28:14.322: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:28:14.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nq9n7" for this suite.
Jan 25 12:28:42.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:28:42.807: INFO: namespace: e2e-tests-container-lifecycle-hook-nq9n7, resource: bindings, ignored listing per whitelist
Jan 25 12:28:42.807: INFO: namespace e2e-tests-container-lifecycle-hook-nq9n7 deletion completed in 28.415632792s

• [SLOW TEST:80.980 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
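
The long "still exists" tail above is the preStop hook at work: deletion waits for the exec hook (and then the grace period) before the pod disappears, after which the test asks its handler pod whether the hook fired. Below is a sketch of a container carrying such a hook; the sleep and the hook command are placeholders (the real test notifies its HTTP handler pod), and the handler type is v1.Handler in the API generation this log comes from, renamed LifecycleHandler in newer k8s.io/api releases.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container right before termination; the pod is
					// only removed once the hook (or the grace period) completes.
					// k8s.io/api <= v0.22 calls this type Handler; newer releases
					// call it LifecycleHandler.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Placeholder notification; the real test pings its handler pod.
							Command: []string{"sh", "-c", "echo prestop-fired > /tmp/prestop"},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
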
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:28:42.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 12:28:43.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-dg44f" to be "success or failure"
Jan 25 12:28:43.043: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.047354ms
Jan 25 12:28:45.070: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033527119s
Jan 25 12:28:47.094: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05710979s
Jan 25 12:28:49.570: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533377102s
Jan 25 12:28:51.589: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552597885s
Jan 25 12:28:53.630: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.593501243s
STEP: Saw pod success
Jan 25 12:28:53.630: INFO: Pod "downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:28:53.638: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 12:28:53.861: INFO: Waiting for pod downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:28:54.077: INFO: Pod downwardapi-volume-39661b4d-3f6e-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:28:54.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dg44f" for this suite.
Jan 25 12:29:02.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:29:02.250: INFO: namespace: e2e-tests-downward-api-dg44f, resource: bindings, ignored listing per whitelist
Jan 25 12:29:02.414: INFO: namespace e2e-tests-downward-api-dg44f deletion completed in 8.304377306s

• [SLOW TEST:19.608 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
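
For context on what the spec above exercises: the pod mounts a downwardAPI volume whose file is filled from the container's own CPU request via a resourceFieldRef, and the container prints that file so the framework can verify the value. A minimal Go sketch of such a pod follows, using the k8s.io/api types this log already dumps elsewhere; the volume name, container name, image and command are illustrative placeholders, not the exact values the framework generates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose downwardAPI volume exposes the container's CPU request
	// as a file; the test then reads that file and checks the value.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container", // name taken from the log above
				Image:   "busybox",          // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
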
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:29:02.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 25 12:29:02.832: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:29:25.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lk95j" for this suite.
Jan 25 12:29:43.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:29:43.924: INFO: namespace: e2e-tests-init-container-lk95j, resource: bindings, ignored listing per whitelist
Jan 25 12:29:44.015: INFO: namespace e2e-tests-init-container-lk95j deletion completed in 18.370948254s

• [SLOW TEST:41.600 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
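
The spec above relies on init containers running to completion, in declaration order, before any regular container starts. A rough sketch of a comparable pod, with RestartPolicy Always as in the "RestartAlways" case; names, images and commands are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two init containers that must each exit 0, in order, before the
	// long-running app container is started.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-container-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
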
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:29:44.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 25 12:29:52.917: INFO: Successfully updated pod "labelsupdate5de60803-3f6e-11ea-8a8b-0242ac110006"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:29:55.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hrzv9" for this suite.
Jan 25 12:30:21.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:30:21.160: INFO: namespace: e2e-tests-projected-hrzv9, resource: bindings, ignored listing per whitelist
Jan 25 12:30:21.409: INFO: namespace e2e-tests-projected-hrzv9 deletion completed in 26.326761332s

• [SLOW TEST:37.393 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:30:21.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 25 12:30:21.542: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:30:38.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-m54j8" for this suite.
Jan 25 12:30:44.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:30:44.869: INFO: namespace: e2e-tests-init-container-m54j8, resource: bindings, ignored listing per whitelist
Jan 25 12:30:44.903: INFO: namespace e2e-tests-init-container-m54j8 deletion completed in 6.145352875s

• [SLOW TEST:23.493 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:30:44.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:30:55.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-rljhb" for this suite.
Jan 25 12:31:01.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:31:01.890: INFO: namespace: e2e-tests-emptydir-wrapper-rljhb, resource: bindings, ignored listing per whitelist
Jan 25 12:31:01.890: INFO: namespace e2e-tests-emptydir-wrapper-rljhb deletion completed in 6.222716419s

• [SLOW TEST:16.986 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:31:01.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 25 12:31:02.209: INFO: Waiting up to 5m0s for pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-2hd7f" to be "success or failure"
Jan 25 12:31:02.238: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 28.713692ms
Jan 25 12:31:04.267: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058303091s
Jan 25 12:31:06.286: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077340337s
Jan 25 12:31:08.385: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176132654s
Jan 25 12:31:10.406: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 8.197362106s
Jan 25 12:31:12.466: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Running", Reason="", readiness=true. Elapsed: 10.25666192s
Jan 25 12:31:14.711: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.501681638s
STEP: Saw pod success
Jan 25 12:31:14.711: INFO: Pod "downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:31:14.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan 25 12:31:15.366: INFO: Waiting for pod downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:31:15.395: INFO: Pod downward-api-8c58947c-3f6e-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:31:15.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2hd7f" for this suite.
Jan 25 12:31:21.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:31:21.652: INFO: namespace: e2e-tests-downward-api-2hd7f, resource: bindings, ignored listing per whitelist
Jan 25 12:31:21.685: INFO: namespace e2e-tests-downward-api-2hd7f deletion completed in 6.273993075s

• [SLOW TEST:19.795 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
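
The test above injects the node's IP into the container environment through the downward API. A minimal sketch of that wiring, using a fieldRef on status.hostIP; the image and command are placeholders, only the dapi-container name is taken from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// HOST_IP is injected from the pod's status.hostIP field; the test
	// container prints its environment and the framework checks the value.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
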
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:31:21.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-98130db4-3f6e-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 12:31:21.875: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-gv764" to be "success or failure"
Jan 25 12:31:21.910: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 35.018037ms
Jan 25 12:31:24.027: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151914414s
Jan 25 12:31:26.047: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172321579s
Jan 25 12:31:28.073: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197811423s
Jan 25 12:31:30.208: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333274983s
Jan 25 12:31:32.223: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.348478272s
STEP: Saw pod success
Jan 25 12:31:32.224: INFO: Pod "pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:31:32.228: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 12:31:32.617: INFO: Waiting for pod pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:31:32.630: INFO: Pod pod-projected-secrets-981432e3-3f6e-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:31:32.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gv764" for this suite.
Jan 25 12:31:38.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:31:38.916: INFO: namespace: e2e-tests-projected-gv764, resource: bindings, ignored listing per whitelist
Jan 25 12:31:39.011: INFO: namespace e2e-tests-projected-gv764 deletion completed in 6.367554781s

• [SLOW TEST:17.326 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
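
The projected-secret spec above combines three things: a projected volume sourcing a Secret, an explicit defaultMode on the projection, and a pod-level security context forcing a non-root UID plus an fsGroup. A sketch under assumed values; the secret name, UID/GID and mode are illustrative, not the framework's generated ones.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(v int32) *int32 { return &v }
func int64Ptr(v int64) *int64 { return &v }

func main() {
	// A projected secret volume mounted read-only with an explicit
	// defaultMode, consumed by a non-root container.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // placeholder non-root UID
				FSGroup:   int64Ptr(1001), // placeholder fsGroup
			},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret", MountPath: "/etc/projected", ReadOnly: true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0o440), // placeholder mode
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
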
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:31:39.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 12:31:59.389: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:31:59.432: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:01.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:01.445: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:03.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:03.940: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:05.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:05.474: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:07.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:07.448: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:09.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:09.456: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:11.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:11.446: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 12:32:13.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 12:32:13.463: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:32:13.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cdhc5" for this suite.
Jan 25 12:32:37.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:32:37.693: INFO: namespace: e2e-tests-container-lifecycle-hook-cdhc5, resource: bindings, ignored listing per whitelist
Jan 25 12:32:37.743: INFO: namespace e2e-tests-container-lifecycle-hook-cdhc5 deletion completed in 24.258088316s

• [SLOW TEST:58.731 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
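
The poststart test above registers an HTTP GET lifecycle hook that fires as soon as the container starts, pointed at a separate handler pod. A sketch of the relevant pod fields; the path, host and port are placeholders. Note that the handler type was corev1.Handler in the v1.13-era API this log comes from and is corev1.LifecycleHandler in recent client-go releases; the sketch uses the newer name.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The postStart hook issues an HTTP GET against a helper pod right
	// after the container starts; the test then checks that the helper
	// saw the request before deleting the pod.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "nginx:1.14-alpine", // image family used elsewhere in this run
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // illustrative target path
							Host: "10.32.0.4",           // illustrative handler-pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
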
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:32:37.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-7bvn
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 12:32:37.973: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7bvn" in namespace "e2e-tests-subpath-rs5fg" to be "success or failure"
Jan 25 12:32:37.997: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 24.406369ms
Jan 25 12:32:40.025: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052060814s
Jan 25 12:32:42.048: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075302066s
Jan 25 12:32:44.301: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327602558s
Jan 25 12:32:46.323: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349722257s
Jan 25 12:32:48.553: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.580365746s
Jan 25 12:32:50.752: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778830598s
Jan 25 12:32:52.791: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.817732356s
Jan 25 12:32:54.805: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 16.831844276s
Jan 25 12:32:56.832: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 18.859088031s
Jan 25 12:32:58.842: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 20.86888252s
Jan 25 12:33:00.876: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 22.902770762s
Jan 25 12:33:02.893: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 24.919893515s
Jan 25 12:33:04.910: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 26.937304625s
Jan 25 12:33:06.928: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 28.954601773s
Jan 25 12:33:08.946: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 30.972569363s
Jan 25 12:33:11.227: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Running", Reason="", readiness=false. Elapsed: 33.254170937s
Jan 25 12:33:13.381: INFO: Pod "pod-subpath-test-configmap-7bvn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.408207729s
STEP: Saw pod success
Jan 25 12:33:13.382: INFO: Pod "pod-subpath-test-configmap-7bvn" satisfied condition "success or failure"
Jan 25 12:33:13.393: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-7bvn container test-container-subpath-configmap-7bvn: 
STEP: delete the pod
Jan 25 12:33:14.259: INFO: Waiting for pod pod-subpath-test-configmap-7bvn to disappear
Jan 25 12:33:14.285: INFO: Pod pod-subpath-test-configmap-7bvn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7bvn
Jan 25 12:33:14.285: INFO: Deleting pod "pod-subpath-test-configmap-7bvn" in namespace "e2e-tests-subpath-rs5fg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:33:14.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rs5fg" for this suite.
Jan 25 12:33:20.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:33:20.621: INFO: namespace: e2e-tests-subpath-rs5fg, resource: bindings, ignored listing per whitelist
Jan 25 12:33:20.664: INFO: namespace e2e-tests-subpath-rs5fg deletion completed in 6.293790374s

• [SLOW TEST:42.921 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
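
The subpath case above mounts a configMap-backed volume with subPath so that only a single projected file appears at the mount point, and the long-running container keeps reading it while the atomic writer updates the volume. A simplified sketch; the configMap name, key and mount paths are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The same configMap volume is mounted twice: once whole, and once
	// with subPath so only one projected file is visible at /test-volume.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "test-volume", MountPath: "/whole-volume"},
					{Name: "test-volume", MountPath: "/test-volume", SubPath: "configmap-key"},
				},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
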
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:33:20.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 25 12:33:20.817: INFO: Waiting up to 5m0s for pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-j2lz2" to be "success or failure"
Jan 25 12:33:20.925: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 108.410899ms
Jan 25 12:33:22.945: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128420044s
Jan 25 12:33:24.972: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154823004s
Jan 25 12:33:27.481: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664157469s
Jan 25 12:33:29.518: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.701282341s
Jan 25 12:33:31.574: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.757225486s
Jan 25 12:33:33.636: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.819569927s
STEP: Saw pod success
Jan 25 12:33:33.637: INFO: Pod "downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:33:33.681: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan 25 12:33:33.969: INFO: Waiting for pod downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:33:34.144: INFO: Pod downward-api-def3d66c-3f6e-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:33:34.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j2lz2" for this suite.
Jan 25 12:33:40.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:33:40.567: INFO: namespace: e2e-tests-downward-api-j2lz2, resource: bindings, ignored listing per whitelist
Jan 25 12:33:40.589: INFO: namespace e2e-tests-downward-api-j2lz2 deletion completed in 6.425462946s

• [SLOW TEST:19.925 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:33:40.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 12:33:40.888: INFO: Waiting up to 5m0s for pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-rlxmc" to be "success or failure"
Jan 25 12:33:40.901: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.559138ms
Jan 25 12:33:43.175: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286503744s
Jan 25 12:33:45.188: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29909646s
Jan 25 12:33:47.205: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.316686598s
Jan 25 12:33:49.217: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.328432722s
Jan 25 12:33:52.079: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.189907916s
STEP: Saw pod success
Jan 25 12:33:52.079: INFO: Pod "pod-eae3c082-3f6e-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:33:52.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eae3c082-3f6e-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 12:33:52.747: INFO: Waiting for pod pod-eae3c082-3f6e-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:33:52.767: INFO: Pod pod-eae3c082-3f6e-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:33:52.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rlxmc" for this suite.
Jan 25 12:33:58.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:33:58.985: INFO: namespace: e2e-tests-emptydir-rlxmc, resource: bindings, ignored listing per whitelist
Jan 25 12:33:59.064: INFO: namespace e2e-tests-emptydir-rlxmc deletion completed in 6.277566565s

• [SLOW TEST:18.474 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
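
The emptyDir case above checks that a non-root container can create a 0666 file on a default-medium (disk-backed) emptyDir and that the mode sticks. The real test drives this through a helper test image; the sketch below approximates it with a shell command and a placeholder UID.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(v int64) *int64 { return &v }

func main() {
	// Default-medium emptyDir written by a non-root container; the
	// container creates a 0666 file and lists it so the mode can be checked.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1000)}, // placeholder non-root UID
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
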
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:33:59.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f5f0601c-3f6e-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 25 12:33:59.606: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-jprdz" to be "success or failure"
Jan 25 12:33:59.622: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046029ms
Jan 25 12:34:02.062: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456131448s
Jan 25 12:34:04.082: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476365958s
Jan 25 12:34:06.607: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.001444195s
Jan 25 12:34:08.633: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.027268948s
Jan 25 12:34:10.753: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.14696653s
STEP: Saw pod success
Jan 25 12:34:10.753: INFO: Pod "pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:34:10.776: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 12:34:10.961: INFO: Waiting for pod pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:34:11.097: INFO: Pod pod-projected-configmaps-f614c379-3f6e-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:34:11.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jprdz" for this suite.
Jan 25 12:34:17.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:34:17.306: INFO: namespace: e2e-tests-projected-jprdz, resource: bindings, ignored listing per whitelist
Jan 25 12:34:17.334: INFO: namespace e2e-tests-projected-jprdz deletion completed in 6.221231223s

• [SLOW TEST:18.270 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
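
The projected-configMap case above remaps individual keys to new paths and sets a per-item file mode via KeyToPath entries. A sketch with placeholder key, path and mode values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(v int32) *int32 { return &v }

func main() {
	// A projected configMap in which a single key is remapped to a nested
	// path and given an explicit per-item file mode.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/path/to/data"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap", MountPath: "/etc/projected", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",          // placeholder key
									Path: "path/to/data",    // remapped path
									Mode: int32Ptr(0o400),   // per-item mode
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
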
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:34:17.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 25 12:34:28.134: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0109a56d-3f6f-11ea-8a8b-0242ac110006", GenerateName:"", Namespace:"e2e-tests-pods-wsh2f", SelfLink:"/api/v1/namespaces/e2e-tests-pods-wsh2f/pods/pod-submit-remove-0109a56d-3f6f-11ea-8a8b-0242ac110006", UID:"010df089-3f6f-11ea-a994-fa163e34d433", ResourceVersion:"19412730", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715552457, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"955081123"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fht84", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0021c4580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fht84", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00176f7f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002615b60), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00176f830)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00176f850)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00176f858), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00176f85c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715552458, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715552467, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715552467, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715552458, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00208d620), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00208d640), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://fcab38142ba9676999932ecda6aa4826d3a8d438cce155e2b7d39bd13bad4e8b"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:34:42.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wsh2f" for this suite.
Jan 25 12:34:48.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:34:48.827: INFO: namespace: e2e-tests-pods-wsh2f, resource: bindings, ignored listing per whitelist
Jan 25 12:34:48.917: INFO: namespace e2e-tests-pods-wsh2f deletion completed in 6.241148443s

• [SLOW TEST:31.582 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
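
The submit-and-remove test above sets up a watch, then asserts that pod creation and graceful deletion are both observed as events. A bare-bones client-go sketch of the same mechanism, assuming a recent client-go where Watch takes a context (the v1.13-era client in this log omits it); the namespace is illustrative, and the name=foo label is the one visible in the pod dump above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the framework logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch pods carrying the test's label and print ADDED / MODIFIED /
	// DELETED events as they arrive; the e2e test uses the same mechanism
	// to confirm that creation and graceful deletion are observed.
	w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=foo",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("event: %s\n", ev.Type)
	}
}
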
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:34:48.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 12:34:49.181: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 25 12:34:49.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6l2sj/daemonsets","resourceVersion":"19412775"},"items":null}

Jan 25 12:34:49.200: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6l2sj/pods","resourceVersion":"19412775"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:34:49.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6l2sj" for this suite.
Jan 25 12:34:55.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:34:55.358: INFO: namespace: e2e-tests-daemonsets-6l2sj, resource: bindings, ignored listing per whitelist
Jan 25 12:34:55.367: INFO: namespace e2e-tests-daemonsets-6l2sj deletion completed in 6.143673974s

S [SKIPPING] [6.450 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 25 12:34:49.181: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:34:55.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 12:34:55.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-prdwz'
Jan 25 12:34:57.711: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 12:34:57.711: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 25 12:34:57.733: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 25 12:34:57.812: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 25 12:34:57.843: INFO: scanned /root for discovery docs: 
Jan 25 12:34:57.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-prdwz'
Jan 25 12:35:24.992: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 12:35:24.992: INFO: stdout: "Created e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc\nScaling up e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 25 12:35:24.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-prdwz'
Jan 25 12:35:25.122: INFO: stderr: ""
Jan 25 12:35:25.122: INFO: stdout: "e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc-q47bf "
Jan 25 12:35:25.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc-q47bf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-prdwz'
Jan 25 12:35:25.239: INFO: stderr: ""
Jan 25 12:35:25.239: INFO: stdout: "true"
Jan 25 12:35:25.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc-q47bf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-prdwz'
Jan 25 12:35:25.333: INFO: stderr: ""
Jan 25 12:35:25.333: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 25 12:35:25.333: INFO: e2e-test-nginx-rc-f351013b513f8b37f49ceeabb626aacc-q47bf is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 25 12:35:25.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-prdwz'
Jan 25 12:35:25.477: INFO: stderr: ""
Jan 25 12:35:25.477: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:35:25.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-prdwz" for this suite.
Jan 25 12:35:49.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:35:49.729: INFO: namespace: e2e-tests-kubectl-prdwz, resource: bindings, ignored listing per whitelist
Jan 25 12:35:49.807: INFO: namespace e2e-tests-kubectl-prdwz deletion completed in 24.240959371s

• [SLOW TEST:54.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:35:49.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 25 12:36:00.779: INFO: Successfully updated pod "pod-update-3802a3ae-3f6f-11ea-8a8b-0242ac110006"
STEP: verifying the updated pod is in kubernetes
Jan 25 12:36:00.898: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:36:00.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d4jkb" for this suite.
Jan 25 12:36:24.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:36:25.060: INFO: namespace: e2e-tests-pods-d4jkb, resource: bindings, ignored listing per whitelist
Jan 25 12:36:25.125: INFO: namespace e2e-tests-pods-d4jkb deletion completed in 24.208679185s

• [SLOW TEST:35.318 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:36:25.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-4cfac6ca-3f6f-11ea-8a8b-0242ac110006
STEP: Creating secret with name s-test-opt-upd-4cfac970-3f6f-11ea-8a8b-0242ac110006
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4cfac6ca-3f6f-11ea-8a8b-0242ac110006
STEP: Updating secret s-test-opt-upd-4cfac970-3f6f-11ea-8a8b-0242ac110006
STEP: Creating secret with name s-test-opt-create-4cfac9ab-3f6f-11ea-8a8b-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:36:44.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6q68j" for this suite.
Jan 25 12:37:08.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:37:08.312: INFO: namespace: e2e-tests-projected-6q68j, resource: bindings, ignored listing per whitelist
Jan 25 12:37:08.419: INFO: namespace e2e-tests-projected-6q68j deletion completed in 24.394917696s

• [SLOW TEST:43.294 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
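The projected-secret spec above mounts optional secrets, then deletes one, updates one and creates one while the pod is running, and waits for the volume to catch up. A hedged sketch of such a pod; the secret names echo the generated s-test-opt-* names in the log, and the mount path and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secrets
    projected:
      sources:
      # optional: true lets the pod start even if a referenced secret is
      # missing; the kubelet updates the files when secrets appear or change.
      - secret:
          name: s-test-opt-del
          optional: true
      - secret:
          name: s-test-opt-upd
          optional: true
      - secret:
          name: s-test-opt-create
          optional: true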
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:37:08.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 25 12:37:22.149: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:37:24.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-wrrm5" for this suite.
Jan 25 12:37:48.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:37:48.232: INFO: namespace: e2e-tests-replicaset-wrrm5, resource: bindings, ignored listing per whitelist
Jan 25 12:37:48.332: INFO: namespace e2e-tests-replicaset-wrrm5 deletion completed in 24.238939664s

• [SLOW TEST:39.912 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
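The ReplicaSet spec above checks ownership handover: a pre-existing bare pod with a matching label is adopted, and relabelling it releases it again. A sketch using the pod-adoption-release name from the log (the container image is an assumption):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
# A bare pod already labelled name=pod-adoption-release gains an ownerReference
# to this ReplicaSet (adoption); changing that label, e.g.
#   kubectl label pod <pod-name> name=released --overwrite
# takes it out of the selector, so the pod is released and a replacement is created.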
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:37:48.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 25 12:37:48.729: INFO: Waiting up to 5m0s for pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-k2twv" to be "success or failure"
Jan 25 12:37:48.765: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 34.976503ms
Jan 25 12:37:50.812: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082013674s
Jan 25 12:37:52.836: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106248709s
Jan 25 12:37:55.034: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304509939s
Jan 25 12:37:57.049: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319099408s
Jan 25 12:37:59.064: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.333976072s
STEP: Saw pod success
Jan 25 12:37:59.064: INFO: Pod "pod-7ea82162-3f6f-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:37:59.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7ea82162-3f6f-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 12:37:59.150: INFO: Waiting for pod pod-7ea82162-3f6f-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:37:59.157: INFO: Pod pod-7ea82162-3f6f-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:37:59.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k2twv" for this suite.
Jan 25 12:38:05.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:38:05.707: INFO: namespace: e2e-tests-emptydir-k2twv, resource: bindings, ignored listing per whitelist
Jan 25 12:38:05.731: INFO: namespace e2e-tests-emptydir-k2twv deletion completed in 6.567174441s

• [SLOW TEST:17.399 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
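The emptyDir spec above mounts a memory-backed volume and checks the mount's mode. A rough hand-runnable equivalent; the image and command are assumptions (the suite uses its own mount-test image):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount's permissions and confirm it is tmpfs-backed.
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory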
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:38:05.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6kbzk
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 12:38:06.000: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 12:38:46.176: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-6kbzk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 12:38:46.176: INFO: >>> kubeConfig: /root/.kube/config
I0125 12:38:46.298818       8 log.go:172] (0xc000937760) (0xc001a11680) Create stream
I0125 12:38:46.299045       8 log.go:172] (0xc000937760) (0xc001a11680) Stream added, broadcasting: 1
I0125 12:38:46.305382       8 log.go:172] (0xc000937760) Reply frame received for 1
I0125 12:38:46.305433       8 log.go:172] (0xc000937760) (0xc001a117c0) Create stream
I0125 12:38:46.305449       8 log.go:172] (0xc000937760) (0xc001a117c0) Stream added, broadcasting: 3
I0125 12:38:46.309885       8 log.go:172] (0xc000937760) Reply frame received for 3
I0125 12:38:46.310094       8 log.go:172] (0xc000937760) (0xc0010b8820) Create stream
I0125 12:38:46.310153       8 log.go:172] (0xc000937760) (0xc0010b8820) Stream added, broadcasting: 5
I0125 12:38:46.316143       8 log.go:172] (0xc000937760) Reply frame received for 5
I0125 12:38:46.755450       8 log.go:172] (0xc000937760) Data frame received for 3
I0125 12:38:46.755606       8 log.go:172] (0xc001a117c0) (3) Data frame handling
I0125 12:38:46.755640       8 log.go:172] (0xc001a117c0) (3) Data frame sent
I0125 12:38:46.924970       8 log.go:172] (0xc000937760) Data frame received for 1
I0125 12:38:46.925164       8 log.go:172] (0xc000937760) (0xc001a117c0) Stream removed, broadcasting: 3
I0125 12:38:46.925262       8 log.go:172] (0xc001a11680) (1) Data frame handling
I0125 12:38:46.925295       8 log.go:172] (0xc001a11680) (1) Data frame sent
I0125 12:38:46.925442       8 log.go:172] (0xc000937760) (0xc0010b8820) Stream removed, broadcasting: 5
I0125 12:38:46.925494       8 log.go:172] (0xc000937760) (0xc001a11680) Stream removed, broadcasting: 1
I0125 12:38:46.925514       8 log.go:172] (0xc000937760) Go away received
I0125 12:38:46.926330       8 log.go:172] (0xc000937760) (0xc001a11680) Stream removed, broadcasting: 1
I0125 12:38:46.926359       8 log.go:172] (0xc000937760) (0xc001a117c0) Stream removed, broadcasting: 3
I0125 12:38:46.926366       8 log.go:172] (0xc000937760) (0xc0010b8820) Stream removed, broadcasting: 5
Jan 25 12:38:46.926: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:38:46.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-6kbzk" for this suite.
Jan 25 12:39:10.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:39:11.181: INFO: namespace: e2e-tests-pod-network-test-6kbzk, resource: bindings, ignored listing per whitelist
Jan 25 12:39:11.185: INFO: namespace e2e-tests-pod-network-test-6kbzk deletion completed in 24.242102297s

• [SLOW TEST:65.454 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:39:11.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:39:19.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-79vgm" for this suite.
Jan 25 12:40:04.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:40:04.273: INFO: namespace: e2e-tests-kubelet-test-79vgm, resource: bindings, ignored listing per whitelist
Jan 25 12:40:04.307: INFO: namespace e2e-tests-kubelet-test-79vgm deletion completed in 44.359888629s

• [SLOW TEST:53.121 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
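The Kubelet hostAliases spec above only needs a pod that declares extra host entries and a container that reads /etc/hosts. A minimal sketch; the pod name, IP and hostnames are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    # The kubelet merges the aliases into this container's /etc/hosts.
    command: ["cat", "/etc/hosts"]
# Inspect the result with:  kubectl logs hostaliases-demo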
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:40:04.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 25 12:40:04.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6njx4'
Jan 25 12:40:04.746: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 12:40:04.746: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 25 12:40:04.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-6njx4'
Jan 25 12:40:04.960: INFO: stderr: ""
Jan 25 12:40:04.960: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:40:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6njx4" for this suite.
Jan 25 12:40:29.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:40:29.181: INFO: namespace: e2e-tests-kubectl-6njx4, resource: bindings, ignored listing per whitelist
Jan 25 12:40:29.927: INFO: namespace e2e-tests-kubectl-6njx4 deletion completed in 24.939045658s

• [SLOW TEST:25.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
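The run above works but goes through the deprecated --generator=job/v1 path, as the stderr warning notes. The same batch/v1 Job can be declared directly and applied with kubectl apply -f; on newer kubectl releases, kubectl create job <name> --image=<image> produces an equivalent object. A sketch reusing the image from the log:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      # OnFailure matches the --restart flag used above.
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine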
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:40:29.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-69k85 in namespace e2e-tests-proxy-7mhc9
I0125 12:40:30.144928       8 runners.go:184] Created replication controller with name: proxy-service-69k85, namespace: e2e-tests-proxy-7mhc9, replica count: 1
I0125 12:40:31.196201       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:32.196885       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:33.197404       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:34.198003       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:35.198814       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:36.199472       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:37.200026       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 12:40:38.200645       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:39.201301       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:40.202172       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:41.202960       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:42.204186       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:43.205137       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:44.205658       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 12:40:45.206574       8 runners.go:184] proxy-service-69k85 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 12:40:45.226: INFO: setup took 15.210489333s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 25 12:40:45.273: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7mhc9/pods/http:proxy-service-69k85-5ghkq:1080/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 25 12:41:29.637: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:29.660: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:31.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:32.746: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:33.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:33.674: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:35.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:35.726: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:37.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:37.676: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:39.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:39.689: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:41.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:41.678: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 12:41:43.661: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 12:41:43.676: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:41:43.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hkjdp" for this suite.
Jan 25 12:42:07.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:42:07.903: INFO: namespace: e2e-tests-container-lifecycle-hook-hkjdp, resource: bindings, ignored listing per whitelist
Jan 25 12:42:08.055: INFO: namespace e2e-tests-container-lifecycle-hook-hkjdp deletion completed in 24.332039799s

• [SLOW TEST:58.899 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
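The lifecycle-hook spec above wires a preStop httpGet hook to a handler pod created in its BeforeEach, deletes the hooked pod, and then checks that the handler received the request. A reduced sketch of the hooked pod only; the path, port and image are assumptions, and in the real suite the hook targets the separate handler pod rather than the container itself:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        # Sent by the kubelet just before the container is stopped; termination
        # then proceeds once the hook returns or times out.
        httpGet:
          path: /shutdown
          port: 80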
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:42:08.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:42:18.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xrnrn" for this suite.
Jan 25 12:43:04.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:43:04.417: INFO: namespace: e2e-tests-kubelet-test-xrnrn, resource: bindings, ignored listing per whitelist
Jan 25 12:43:04.547: INFO: namespace e2e-tests-kubelet-test-xrnrn deletion completed in 46.206668848s

• [SLOW TEST:56.492 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
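The Kubelet busybox-command spec above just runs a one-shot command and reads its stdout back through the log endpoint. A hand-runnable equivalent; the pod name and message are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello from the busybox container"]
# Stdout is captured by the container runtime and served by the kubelet:
#   kubectl logs busybox-logs-demo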
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:43:04.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-69l7p
Jan 25 12:43:14.779: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-69l7p
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 12:43:14.787: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:47:15.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-69l7p" for this suite.
Jan 25 12:47:21.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:47:21.902: INFO: namespace: e2e-tests-container-probe-69l7p, resource: bindings, ignored listing per whitelist
Jan 25 12:47:21.905: INFO: namespace e2e-tests-container-probe-69l7p deletion completed in 6.194513531s

• [SLOW TEST:257.357 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
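The probing spec above starts liveness-http, records an initial restartCount of 0, and then simply watches for roughly four minutes to confirm the counter never moves. A comparable pod, with the probe pointed at an endpoint this image actually serves; the suite instead uses its own test image answering on /healthz:8080, so the path and port here are substitutions:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
# As long as the probe keeps returning 200, the kubelet never restarts it:
#   kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'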
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:47:21.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 12:47:22.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-n6q6q" to be "success or failure"
Jan 25 12:47:22.384: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 163.657365ms
Jan 25 12:47:24.397: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177406624s
Jan 25 12:47:26.413: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192985102s
Jan 25 12:47:28.535: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315431901s
Jan 25 12:47:30.568: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347555058s
Jan 25 12:47:33.457: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.236608426s
STEP: Saw pod success
Jan 25 12:47:33.457: INFO: Pod "downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:47:33.473: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 12:47:34.092: INFO: Waiting for pod downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:47:34.115: INFO: Pod downwardapi-volume-d47c4aa2-3f70-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:47:34.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n6q6q" for this suite.
Jan 25 12:47:40.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:47:40.388: INFO: namespace: e2e-tests-downward-api-n6q6q, resource: bindings, ignored listing per whitelist
Jan 25 12:47:40.399: INFO: namespace e2e-tests-downward-api-n6q6q deletion completed in 6.274239489s

• [SLOW TEST:18.494 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
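The downward-API spec above exposes the container's memory request through a volume file and checks the file's contents. A sketch under the assumption of a 32Mi request; the pod name, image and file path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Prints the request in bytes (33554432 for 32Mi) unless a divisor is set.
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory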
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:47:40.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 12:47:40.930: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 25 12:47:46.221: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 12:47:50.319: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 25 12:47:52.339: INFO: Creating deployment "test-rollover-deployment"
Jan 25 12:47:52.360: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 25 12:47:54.407: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 25 12:47:54.428: INFO: Ensure that both replica sets have 1 created replica
Jan 25 12:47:54.443: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 25 12:47:54.677: INFO: Updating deployment test-rollover-deployment
Jan 25 12:47:54.677: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 25 12:47:56.773: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 25 12:47:56.795: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 25 12:47:57.694: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:47:57.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553276, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:47:59.757: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:47:59.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553276, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:01.930: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:01.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553276, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:03.726: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:03.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553276, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:05.724: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:05.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:07.740: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:07.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:09.731: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:09.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:11.747: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:11.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:13.719: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 12:48:13.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:15.811: INFO: 
Jan 25 12:48:15.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553295, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553272, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:48:17.723: INFO: 
Jan 25 12:48:17.723: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 25 12:48:17.744: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-9h7kt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9h7kt/deployments/test-rollover-deployment,UID:e6733487-3f70-11ea-a994-fa163e34d433,ResourceVersion:19414277,Generation:2,CreationTimestamp:2020-01-25 12:47:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-25 12:47:52 +0000 UTC 2020-01-25 12:47:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-25 12:48:15 +0000 UTC 2020-01-25 12:47:52 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 12:48:17.754: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-9h7kt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9h7kt/replicasets/test-rollover-deployment-5b8479fdb6,UID:e7d9a1dd-3f70-11ea-a994-fa163e34d433,ResourceVersion:19414267,Generation:2,CreationTimestamp:2020-01-25 12:47:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e6733487-3f70-11ea-a994-fa163e34d433 0xc002844cf7 0xc002844cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 25 12:48:17.754: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 25 12:48:17.754: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-9h7kt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9h7kt/replicasets/test-rollover-controller,UID:dfa0ec56-3f70-11ea-a994-fa163e34d433,ResourceVersion:19414276,Generation:2,CreationTimestamp:2020-01-25 12:47:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e6733487-3f70-11ea-a994-fa163e34d433 0xc002844adf 0xc002844af0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 12:48:17.755: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-9h7kt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9h7kt/replicasets/test-rollover-deployment-58494b7559,UID:e67a2153-3f70-11ea-a994-fa163e34d433,ResourceVersion:19414229,Generation:2,CreationTimestamp:2020-01-25 12:47:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e6733487-3f70-11ea-a994-fa163e34d433 0xc002844c17 0xc002844c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 12:48:17.768: INFO: Pod "test-rollover-deployment-5b8479fdb6-7xhhj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-7xhhj,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-9h7kt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9h7kt/pods/test-rollover-deployment-5b8479fdb6-7xhhj,UID:e84c6b2f-3f70-11ea-a994-fa163e34d433,ResourceVersion:19414252,Generation:0,CreationTimestamp:2020-01-25 12:47:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 e7d9a1dd-3f70-11ea-a994-fa163e34d433 0xc0026d2867 0xc0026d2868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-v5vt8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v5vt8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-v5vt8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026d28d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026d28f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:47:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:48:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:48:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:47:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-25 12:47:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-25 12:48:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://8c2e4e86d9bb21ac9732705cc71c4aac9cb035f85a3089238338d8bdd307cfc3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:48:17.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9h7kt" for this suite.
Jan 25 12:48:29.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:48:29.886: INFO: namespace: e2e-tests-deployment-9h7kt, resource: bindings, ignored listing per whitelist
Jan 25 12:48:30.053: INFO: namespace e2e-tests-deployment-9h7kt deletion completed in 12.272829449s

• [SLOW TEST:49.654 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
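For readers reproducing the rollover by hand, a minimal kubectl sketch (the deployment name and images are taken from the log above; the commands are illustrative, not the e2e test's own code):

# Start a deployment whose pods never become ready (nonexistent tag), then
# update the template so a new ReplicaSet rolls over the old one.
kubectl create deployment test-rollover-deployment \
  --image=gcr.io/google_samples/gb-redisslave:nonexistent
kubectl set image deployment/test-rollover-deployment \
  '*=gcr.io/kubernetes-e2e-test-images/redis:1.0'
kubectl rollout status deployment/test-rollover-deployment
kubectl get replicasets   # the superseded ReplicaSet ends with DESIRED 0, as in the dump above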
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:48:30.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 25 12:48:42.342: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fd0c21e8-3f70-11ea-8a8b-0242ac110006,GenerateName:,Namespace:e2e-tests-events-5x5sl,SelfLink:/api/v1/namespaces/e2e-tests-events-5x5sl/pods/send-events-fd0c21e8-3f70-11ea-8a8b-0242ac110006,UID:fd0e224a-3f70-11ea-a994-fa163e34d433,ResourceVersion:19414350,Generation:0,CreationTimestamp:2020-01-25 12:48:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 253942925,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-prfks {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-prfks,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-prfks true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001de8aa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001de8ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:48:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:48:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:48:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:48:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-25 12:48:30 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-25 12:48:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://8fddca5d56c2a3a7a8ad2e25dcded90182f15c02b3eee69bb765fd4d310c016f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 25 12:48:44.364: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 25 12:48:46.384: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:48:46.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-5x5sl" for this suite.
Jan 25 12:49:36.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:49:36.720: INFO: namespace: e2e-tests-events-5x5sl, resource: bindings, ignored listing per whitelist
Jan 25 12:49:36.876: INFO: namespace e2e-tests-events-5x5sl deletion completed in 50.409922412s

• [SLOW TEST:66.823 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
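A hedged sketch of listing the same scheduler and kubelet events by hand (the pod name here is hypothetical; involvedObject.* are standard event field selectors):

# The Scheduled event is reported by "default-scheduler"; Pulled/Created/Started
# events are reported by the kubelet on the node that runs the pod.
kubectl get events --namespace default \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-demo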
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:49:36.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-xktjt
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-xktjt
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-xktjt
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-xktjt
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-xktjt
Jan 25 12:49:51.299: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xktjt, name: ss-0, uid: 2a2df258-3f71-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 25 12:49:52.516: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xktjt, name: ss-0, uid: 2a2df258-3f71-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 12:49:52.703: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-xktjt, name: ss-0, uid: 2a2df258-3f71-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 12:49:52.712: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-xktjt
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-xktjt
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-xktjt and enters the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 25 12:50:08.564: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xktjt
Jan 25 12:50:08.593: INFO: Scaling statefulset ss to 0
Jan 25 12:50:18.721: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 12:50:18.734: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:50:18.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-xktjt" for this suite.
Jan 25 12:50:26.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:50:27.147: INFO: namespace: e2e-tests-statefulset-xktjt, resource: bindings, ignored listing per whitelist
Jan 25 12:50:27.229: INFO: namespace e2e-tests-statefulset-xktjt deletion completed in 8.445569959s

• [SLOW TEST:50.352 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
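A minimal sketch of watching the recreation by hand, assuming the namespace and pod names recorded above:

# In one terminal: watch ss-0 fail and get deleted while test-pod holds the
# conflicting host port, then get recreated once the conflict is gone.
kubectl get pod ss-0 --namespace e2e-tests-statefulset-xktjt --watch
# In another terminal: remove the conflicting pod so ss-0 can reach Running.
kubectl delete pod test-pod --namespace e2e-tests-statefulset-xktjt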
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:50:27.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wcxdn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 12:50:27.471: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 12:51:04.016: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-wcxdn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 12:51:04.016: INFO: >>> kubeConfig: /root/.kube/config
I0125 12:51:04.094229       8 log.go:172] (0xc001b18370) (0xc001dca280) Create stream
I0125 12:51:04.094310       8 log.go:172] (0xc001b18370) (0xc001dca280) Stream added, broadcasting: 1
I0125 12:51:04.099221       8 log.go:172] (0xc001b18370) Reply frame received for 1
I0125 12:51:04.099270       8 log.go:172] (0xc001b18370) (0xc00219e000) Create stream
I0125 12:51:04.099287       8 log.go:172] (0xc001b18370) (0xc00219e000) Stream added, broadcasting: 3
I0125 12:51:04.100647       8 log.go:172] (0xc001b18370) Reply frame received for 3
I0125 12:51:04.100672       8 log.go:172] (0xc001b18370) (0xc001dca320) Create stream
I0125 12:51:04.100684       8 log.go:172] (0xc001b18370) (0xc001dca320) Stream added, broadcasting: 5
I0125 12:51:04.102121       8 log.go:172] (0xc001b18370) Reply frame received for 5
I0125 12:51:05.306329       8 log.go:172] (0xc001b18370) Data frame received for 3
I0125 12:51:05.306486       8 log.go:172] (0xc00219e000) (3) Data frame handling
I0125 12:51:05.306542       8 log.go:172] (0xc00219e000) (3) Data frame sent
I0125 12:51:05.498515       8 log.go:172] (0xc001b18370) Data frame received for 1
I0125 12:51:05.498781       8 log.go:172] (0xc001b18370) (0xc00219e000) Stream removed, broadcasting: 3
I0125 12:51:05.498933       8 log.go:172] (0xc001dca280) (1) Data frame handling
I0125 12:51:05.499037       8 log.go:172] (0xc001dca280) (1) Data frame sent
I0125 12:51:05.499086       8 log.go:172] (0xc001b18370) (0xc001dca280) Stream removed, broadcasting: 1
I0125 12:51:05.499560       8 log.go:172] (0xc001b18370) (0xc001dca320) Stream removed, broadcasting: 5
I0125 12:51:05.499834       8 log.go:172] (0xc001b18370) (0xc001dca280) Stream removed, broadcasting: 1
I0125 12:51:05.499911       8 log.go:172] (0xc001b18370) (0xc00219e000) Stream removed, broadcasting: 3
I0125 12:51:05.499945       8 log.go:172] (0xc001b18370) (0xc001dca320) Stream removed, broadcasting: 5
I0125 12:51:05.499984       8 log.go:172] (0xc001b18370) Go away received
Jan 25 12:51:05.500: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:51:05.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wcxdn" for this suite.
Jan 25 12:51:29.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:51:29.638: INFO: namespace: e2e-tests-pod-network-test-wcxdn, resource: bindings, ignored listing per whitelist
Jan 25 12:51:29.762: INFO: namespace e2e-tests-pod-network-test-wcxdn deletion completed in 24.234195487s

• [SLOW TEST:62.532 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
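The UDP probe above can be replayed manually; a sketch using the pod, address, and port recorded in the log (the kubectl wrapper is illustrative, the test drives the exec over the API directly):

# Ask the netserver pod for its hostname over UDP from the host-network test pod.
kubectl exec --namespace=e2e-tests-pod-network-test-wcxdn host-test-container-pod \
  -c hostexec -- /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081"
# Expected reply: netserver-0, matching "Found all expected endpoints" above.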
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:51:29.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bvdwv
Jan 25 12:51:42.041: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bvdwv
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 12:51:42.054: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:55:42.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bvdwv" for this suite.
Jan 25 12:55:50.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:55:51.005: INFO: namespace: e2e-tests-container-probe-bvdwv, resource: bindings, ignored listing per whitelist
Jan 25 12:55:51.048: INFO: namespace e2e-tests-container-probe-bvdwv deletion completed in 8.299105421s

• [SLOW TEST:261.286 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
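A minimal sketch of the liveness-exec pattern this test exercises, assuming a busybox image (the manifest is illustrative, not the test's actual pod spec); because /tmp/health is created and never removed, the "cat /tmp/health" probe keeps succeeding and the restart count stays 0:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the probed file once, then just sleep; the probe never fails.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# After a few minutes this should still print 0.
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'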
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:55:51.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:55:58.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-m4rpg" for this suite.
Jan 25 12:56:04.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:56:04.191: INFO: namespace: e2e-tests-namespaces-m4rpg, resource: bindings, ignored listing per whitelist
Jan 25 12:56:04.273: INFO: namespace e2e-tests-namespaces-m4rpg deletion completed in 6.206957389s
STEP: Destroying namespace "e2e-tests-nsdeletetest-kkz4c" for this suite.
Jan 25 12:56:04.277: INFO: Namespace e2e-tests-nsdeletetest-kkz4c was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-m4mcd" for this suite.
Jan 25 12:56:10.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:56:10.477: INFO: namespace: e2e-tests-nsdeletetest-m4mcd, resource: bindings, ignored listing per whitelist
Jan 25 12:56:10.478: INFO: namespace e2e-tests-nsdeletetest-m4mcd deletion completed in 6.200672378s

• [SLOW TEST:19.428 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
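A short sketch of the same check by hand (the names are hypothetical): services are namespace-scoped objects, so deleting the namespace deletes them with it.

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-svc --tcp=80:80 --namespace nsdelete-demo
kubectl delete namespace nsdelete-demo      # deletion can take several seconds
kubectl get service test-svc --namespace nsdelete-demo   # fails with NotFound afterwards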
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:56:10.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7xpr5
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7xpr5
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7xpr5
Jan 25 12:56:11.399: INFO: Found 0 stateful pods, waiting for 1
Jan 25 12:56:22.003: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 12:56:31.463: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-up will halt with an unhealthy stateful pod
Jan 25 12:56:31.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 12:56:32.352: INFO: stderr: "I0125 12:56:31.744718    3436 log.go:172] (0xc00073a370) (0xc000762640) Create stream\nI0125 12:56:31.745587    3436 log.go:172] (0xc00073a370) (0xc000762640) Stream added, broadcasting: 1\nI0125 12:56:31.752347    3436 log.go:172] (0xc00073a370) Reply frame received for 1\nI0125 12:56:31.752415    3436 log.go:172] (0xc00073a370) (0xc0005bec80) Create stream\nI0125 12:56:31.752444    3436 log.go:172] (0xc00073a370) (0xc0005bec80) Stream added, broadcasting: 3\nI0125 12:56:31.754030    3436 log.go:172] (0xc00073a370) Reply frame received for 3\nI0125 12:56:31.754081    3436 log.go:172] (0xc00073a370) (0xc000672000) Create stream\nI0125 12:56:31.754109    3436 log.go:172] (0xc00073a370) (0xc000672000) Stream added, broadcasting: 5\nI0125 12:56:31.755931    3436 log.go:172] (0xc00073a370) Reply frame received for 5\nI0125 12:56:32.107805    3436 log.go:172] (0xc00073a370) Data frame received for 3\nI0125 12:56:32.107875    3436 log.go:172] (0xc0005bec80) (3) Data frame handling\nI0125 12:56:32.107887    3436 log.go:172] (0xc0005bec80) (3) Data frame sent\nI0125 12:56:32.341186    3436 log.go:172] (0xc00073a370) (0xc0005bec80) Stream removed, broadcasting: 3\nI0125 12:56:32.341488    3436 log.go:172] (0xc00073a370) Data frame received for 1\nI0125 12:56:32.341544    3436 log.go:172] (0xc000762640) (1) Data frame handling\nI0125 12:56:32.341564    3436 log.go:172] (0xc000762640) (1) Data frame sent\nI0125 12:56:32.341582    3436 log.go:172] (0xc00073a370) (0xc000762640) Stream removed, broadcasting: 1\nI0125 12:56:32.341592    3436 log.go:172] (0xc00073a370) (0xc000672000) Stream removed, broadcasting: 5\nI0125 12:56:32.342118    3436 log.go:172] (0xc00073a370) Go away received\nI0125 12:56:32.342187    3436 log.go:172] (0xc00073a370) (0xc000762640) Stream removed, broadcasting: 1\nI0125 12:56:32.342197    3436 log.go:172] (0xc00073a370) (0xc0005bec80) Stream removed, broadcasting: 3\nI0125 12:56:32.342203    3436 log.go:172] (0xc00073a370) (0xc000672000) Stream removed, broadcasting: 5\n"
Jan 25 12:56:32.353: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 12:56:32.353: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 12:56:32.371: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 12:56:42.410: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 12:56:42.410: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 12:56:42.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998316s
Jan 25 12:56:43.510: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.964046546s
Jan 25 12:56:44.539: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.937125393s
Jan 25 12:56:45.558: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.907809857s
Jan 25 12:56:46.598: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.889095576s
Jan 25 12:56:47.680: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.8489803s
Jan 25 12:56:48.715: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.767454165s
Jan 25 12:56:49.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.73260947s
Jan 25 12:56:50.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.715055919s
Jan 25 12:56:51.766: INFO: Verifying statefulset ss doesn't scale past 1 for another 697.397752ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-7xpr5
Jan 25 12:56:52.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 12:56:53.389: INFO: stderr: "I0125 12:56:53.038932    3459 log.go:172] (0xc0001c84d0) (0xc0007d8640) Create stream\nI0125 12:56:53.039475    3459 log.go:172] (0xc0001c84d0) (0xc0007d8640) Stream added, broadcasting: 1\nI0125 12:56:53.048943    3459 log.go:172] (0xc0001c84d0) Reply frame received for 1\nI0125 12:56:53.049104    3459 log.go:172] (0xc0001c84d0) (0xc0007ac000) Create stream\nI0125 12:56:53.049177    3459 log.go:172] (0xc0001c84d0) (0xc0007ac000) Stream added, broadcasting: 3\nI0125 12:56:53.051316    3459 log.go:172] (0xc0001c84d0) Reply frame received for 3\nI0125 12:56:53.051373    3459 log.go:172] (0xc0001c84d0) (0xc0007d86e0) Create stream\nI0125 12:56:53.051418    3459 log.go:172] (0xc0001c84d0) (0xc0007d86e0) Stream added, broadcasting: 5\nI0125 12:56:53.053738    3459 log.go:172] (0xc0001c84d0) Reply frame received for 5\nI0125 12:56:53.204831    3459 log.go:172] (0xc0001c84d0) Data frame received for 3\nI0125 12:56:53.205304    3459 log.go:172] (0xc0007ac000) (3) Data frame handling\nI0125 12:56:53.205406    3459 log.go:172] (0xc0007ac000) (3) Data frame sent\nI0125 12:56:53.374160    3459 log.go:172] (0xc0001c84d0) Data frame received for 1\nI0125 12:56:53.374283    3459 log.go:172] (0xc0007d8640) (1) Data frame handling\nI0125 12:56:53.374337    3459 log.go:172] (0xc0007d8640) (1) Data frame sent\nI0125 12:56:53.374475    3459 log.go:172] (0xc0001c84d0) (0xc0007d8640) Stream removed, broadcasting: 1\nI0125 12:56:53.375335    3459 log.go:172] (0xc0001c84d0) (0xc0007d86e0) Stream removed, broadcasting: 5\nI0125 12:56:53.375614    3459 log.go:172] (0xc0001c84d0) (0xc0007ac000) Stream removed, broadcasting: 3\nI0125 12:56:53.375694    3459 log.go:172] (0xc0001c84d0) Go away received\nI0125 12:56:53.376131    3459 log.go:172] (0xc0001c84d0) (0xc0007d8640) Stream removed, broadcasting: 1\nI0125 12:56:53.376199    3459 log.go:172] (0xc0001c84d0) (0xc0007ac000) Stream removed, broadcasting: 3\nI0125 12:56:53.376225    3459 log.go:172] (0xc0001c84d0) (0xc0007d86e0) Stream removed, broadcasting: 5\n"
Jan 25 12:56:53.389: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 12:56:53.389: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 12:56:53.474: INFO: Found 1 stateful pods, waiting for 3
Jan 25 12:57:03.519: INFO: Found 2 stateful pods, waiting for 3
Jan 25 12:57:13.502: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 12:57:13.502: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 12:57:13.502: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 12:57:23.502: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 12:57:23.503: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 12:57:23.503: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale-down will halt with an unhealthy stateful pod
Jan 25 12:57:23.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 12:57:24.546: INFO: stderr: "I0125 12:57:23.834406    3481 log.go:172] (0xc0007b8420) (0xc00072a640) Create stream\nI0125 12:57:23.835445    3481 log.go:172] (0xc0007b8420) (0xc00072a640) Stream added, broadcasting: 1\nI0125 12:57:23.852630    3481 log.go:172] (0xc0007b8420) Reply frame received for 1\nI0125 12:57:23.852741    3481 log.go:172] (0xc0007b8420) (0xc0005bcc80) Create stream\nI0125 12:57:23.852749    3481 log.go:172] (0xc0007b8420) (0xc0005bcc80) Stream added, broadcasting: 3\nI0125 12:57:23.854659    3481 log.go:172] (0xc0007b8420) Reply frame received for 3\nI0125 12:57:23.854803    3481 log.go:172] (0xc0007b8420) (0xc0003ca000) Create stream\nI0125 12:57:23.854869    3481 log.go:172] (0xc0007b8420) (0xc0003ca000) Stream added, broadcasting: 5\nI0125 12:57:23.859933    3481 log.go:172] (0xc0007b8420) Reply frame received for 5\nI0125 12:57:24.259255    3481 log.go:172] (0xc0007b8420) Data frame received for 3\nI0125 12:57:24.259486    3481 log.go:172] (0xc0005bcc80) (3) Data frame handling\nI0125 12:57:24.259513    3481 log.go:172] (0xc0005bcc80) (3) Data frame sent\nI0125 12:57:24.531627    3481 log.go:172] (0xc0007b8420) Data frame received for 1\nI0125 12:57:24.532259    3481 log.go:172] (0xc00072a640) (1) Data frame handling\nI0125 12:57:24.532308    3481 log.go:172] (0xc00072a640) (1) Data frame sent\nI0125 12:57:24.532327    3481 log.go:172] (0xc0007b8420) (0xc00072a640) Stream removed, broadcasting: 1\nI0125 12:57:24.532790    3481 log.go:172] (0xc0007b8420) (0xc0005bcc80) Stream removed, broadcasting: 3\nI0125 12:57:24.532976    3481 log.go:172] (0xc0007b8420) (0xc0003ca000) Stream removed, broadcasting: 5\nI0125 12:57:24.533089    3481 log.go:172] (0xc0007b8420) Go away received\nI0125 12:57:24.533263    3481 log.go:172] (0xc0007b8420) (0xc00072a640) Stream removed, broadcasting: 1\nI0125 12:57:24.533277    3481 log.go:172] (0xc0007b8420) (0xc0005bcc80) Stream removed, broadcasting: 3\nI0125 12:57:24.533284    3481 log.go:172] (0xc0007b8420) (0xc0003ca000) Stream removed, broadcasting: 5\n"
Jan 25 12:57:24.546: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 12:57:24.546: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 12:57:24.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 12:57:25.206: INFO: stderr: "I0125 12:57:24.879515    3503 log.go:172] (0xc0008942c0) (0xc000740640) Create stream\nI0125 12:57:24.880037    3503 log.go:172] (0xc0008942c0) (0xc000740640) Stream added, broadcasting: 1\nI0125 12:57:24.884436    3503 log.go:172] (0xc0008942c0) Reply frame received for 1\nI0125 12:57:24.884469    3503 log.go:172] (0xc0008942c0) (0xc000672c80) Create stream\nI0125 12:57:24.884478    3503 log.go:172] (0xc0008942c0) (0xc000672c80) Stream added, broadcasting: 3\nI0125 12:57:24.885353    3503 log.go:172] (0xc0008942c0) Reply frame received for 3\nI0125 12:57:24.885369    3503 log.go:172] (0xc0008942c0) (0xc000672dc0) Create stream\nI0125 12:57:24.885376    3503 log.go:172] (0xc0008942c0) (0xc000672dc0) Stream added, broadcasting: 5\nI0125 12:57:24.886111    3503 log.go:172] (0xc0008942c0) Reply frame received for 5\nI0125 12:57:25.065732    3503 log.go:172] (0xc0008942c0) Data frame received for 3\nI0125 12:57:25.065936    3503 log.go:172] (0xc000672c80) (3) Data frame handling\nI0125 12:57:25.065981    3503 log.go:172] (0xc000672c80) (3) Data frame sent\nI0125 12:57:25.193601    3503 log.go:172] (0xc0008942c0) (0xc000672c80) Stream removed, broadcasting: 3\nI0125 12:57:25.194113    3503 log.go:172] (0xc0008942c0) Data frame received for 1\nI0125 12:57:25.194307    3503 log.go:172] (0xc0008942c0) (0xc000672dc0) Stream removed, broadcasting: 5\nI0125 12:57:25.194402    3503 log.go:172] (0xc000740640) (1) Data frame handling\nI0125 12:57:25.194476    3503 log.go:172] (0xc000740640) (1) Data frame sent\nI0125 12:57:25.194622    3503 log.go:172] (0xc0008942c0) (0xc000740640) Stream removed, broadcasting: 1\nI0125 12:57:25.194688    3503 log.go:172] (0xc0008942c0) Go away received\nI0125 12:57:25.195388    3503 log.go:172] (0xc0008942c0) (0xc000740640) Stream removed, broadcasting: 1\nI0125 12:57:25.195415    3503 log.go:172] (0xc0008942c0) (0xc000672c80) Stream removed, broadcasting: 3\nI0125 12:57:25.195421    3503 log.go:172] (0xc0008942c0) (0xc000672dc0) Stream removed, broadcasting: 5\n"
Jan 25 12:57:25.206: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 12:57:25.206: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 12:57:25.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 25 12:57:25.838: INFO: stderr: "I0125 12:57:25.478962    3525 log.go:172] (0xc0007da370) (0xc000714640) Create stream\nI0125 12:57:25.479112    3525 log.go:172] (0xc0007da370) (0xc000714640) Stream added, broadcasting: 1\nI0125 12:57:25.483239    3525 log.go:172] (0xc0007da370) Reply frame received for 1\nI0125 12:57:25.483303    3525 log.go:172] (0xc0007da370) (0xc000596be0) Create stream\nI0125 12:57:25.483317    3525 log.go:172] (0xc0007da370) (0xc000596be0) Stream added, broadcasting: 3\nI0125 12:57:25.484365    3525 log.go:172] (0xc0007da370) Reply frame received for 3\nI0125 12:57:25.484382    3525 log.go:172] (0xc0007da370) (0xc0007146e0) Create stream\nI0125 12:57:25.484388    3525 log.go:172] (0xc0007da370) (0xc0007146e0) Stream added, broadcasting: 5\nI0125 12:57:25.485446    3525 log.go:172] (0xc0007da370) Reply frame received for 5\nI0125 12:57:25.623518    3525 log.go:172] (0xc0007da370) Data frame received for 3\nI0125 12:57:25.623555    3525 log.go:172] (0xc000596be0) (3) Data frame handling\nI0125 12:57:25.623568    3525 log.go:172] (0xc000596be0) (3) Data frame sent\nI0125 12:57:25.825075    3525 log.go:172] (0xc0007da370) Data frame received for 1\nI0125 12:57:25.825175    3525 log.go:172] (0xc0007da370) (0xc0007146e0) Stream removed, broadcasting: 5\nI0125 12:57:25.825217    3525 log.go:172] (0xc000714640) (1) Data frame handling\nI0125 12:57:25.825226    3525 log.go:172] (0xc000714640) (1) Data frame sent\nI0125 12:57:25.825250    3525 log.go:172] (0xc0007da370) (0xc000714640) Stream removed, broadcasting: 1\nI0125 12:57:25.825401    3525 log.go:172] (0xc0007da370) (0xc000596be0) Stream removed, broadcasting: 3\nI0125 12:57:25.825551    3525 log.go:172] (0xc0007da370) Go away received\nI0125 12:57:25.825613    3525 log.go:172] (0xc0007da370) (0xc000714640) Stream removed, broadcasting: 1\nI0125 12:57:25.825621    3525 log.go:172] (0xc0007da370) (0xc000596be0) Stream removed, broadcasting: 3\nI0125 12:57:25.825624    3525 log.go:172] (0xc0007da370) (0xc0007146e0) Stream removed, broadcasting: 5\n"
Jan 25 12:57:25.838: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 25 12:57:25.838: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 25 12:57:25.838: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 12:57:25.873: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 12:57:25.873: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 12:57:25.873: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 12:57:25.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998943s
Jan 25 12:57:26.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990874542s
Jan 25 12:57:27.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.958188036s
Jan 25 12:57:28.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.923532413s
Jan 25 12:57:30.013: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.912604795s
Jan 25 12:57:31.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.881127322s
Jan 25 12:57:32.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.859690967s
Jan 25 12:57:33.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.83462731s
Jan 25 12:57:34.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.772983409s
Jan 25 12:57:35.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 736.313976ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-7xpr5
Jan 25 12:57:36.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 12:57:37.106: INFO: stderr: "I0125 12:57:36.621152    3548 log.go:172] (0xc000714370) (0xc000736640) Create stream\nI0125 12:57:36.621420    3548 log.go:172] (0xc000714370) (0xc000736640) Stream added, broadcasting: 1\nI0125 12:57:36.639536    3548 log.go:172] (0xc000714370) Reply frame received for 1\nI0125 12:57:36.639595    3548 log.go:172] (0xc000714370) (0xc0005c2d20) Create stream\nI0125 12:57:36.639609    3548 log.go:172] (0xc000714370) (0xc0005c2d20) Stream added, broadcasting: 3\nI0125 12:57:36.640986    3548 log.go:172] (0xc000714370) Reply frame received for 3\nI0125 12:57:36.641008    3548 log.go:172] (0xc000714370) (0xc000500000) Create stream\nI0125 12:57:36.641018    3548 log.go:172] (0xc000714370) (0xc000500000) Stream added, broadcasting: 5\nI0125 12:57:36.642438    3548 log.go:172] (0xc000714370) Reply frame received for 5\nI0125 12:57:36.789167    3548 log.go:172] (0xc000714370) Data frame received for 3\nI0125 12:57:36.789273    3548 log.go:172] (0xc0005c2d20) (3) Data frame handling\nI0125 12:57:36.789304    3548 log.go:172] (0xc0005c2d20) (3) Data frame sent\nI0125 12:57:37.093185    3548 log.go:172] (0xc000714370) Data frame received for 1\nI0125 12:57:37.093270    3548 log.go:172] (0xc000736640) (1) Data frame handling\nI0125 12:57:37.093292    3548 log.go:172] (0xc000736640) (1) Data frame sent\nI0125 12:57:37.093463    3548 log.go:172] (0xc000714370) (0xc000500000) Stream removed, broadcasting: 5\nI0125 12:57:37.093630    3548 log.go:172] (0xc000714370) (0xc0005c2d20) Stream removed, broadcasting: 3\nI0125 12:57:37.093771    3548 log.go:172] (0xc000714370) (0xc000736640) Stream removed, broadcasting: 1\nI0125 12:57:37.093858    3548 log.go:172] (0xc000714370) Go away received\nI0125 12:57:37.094330    3548 log.go:172] (0xc000714370) (0xc000736640) Stream removed, broadcasting: 1\nI0125 12:57:37.094350    3548 log.go:172] (0xc000714370) (0xc0005c2d20) Stream removed, broadcasting: 3\nI0125 12:57:37.094357    3548 log.go:172] (0xc000714370) (0xc000500000) Stream removed, broadcasting: 5\n"
Jan 25 12:57:37.107: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 12:57:37.107: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 12:57:37.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 12:57:38.077: INFO: stderr: "I0125 12:57:37.432714    3570 log.go:172] (0xc000736370) (0xc0007ac640) Create stream\nI0125 12:57:37.432875    3570 log.go:172] (0xc000736370) (0xc0007ac640) Stream added, broadcasting: 1\nI0125 12:57:37.440546    3570 log.go:172] (0xc000736370) Reply frame received for 1\nI0125 12:57:37.440596    3570 log.go:172] (0xc000736370) (0xc00067af00) Create stream\nI0125 12:57:37.440631    3570 log.go:172] (0xc000736370) (0xc00067af00) Stream added, broadcasting: 3\nI0125 12:57:37.443104    3570 log.go:172] (0xc000736370) Reply frame received for 3\nI0125 12:57:37.443152    3570 log.go:172] (0xc000736370) (0xc000708000) Create stream\nI0125 12:57:37.443177    3570 log.go:172] (0xc000736370) (0xc000708000) Stream added, broadcasting: 5\nI0125 12:57:37.445571    3570 log.go:172] (0xc000736370) Reply frame received for 5\nI0125 12:57:37.687360    3570 log.go:172] (0xc000736370) Data frame received for 3\nI0125 12:57:37.687466    3570 log.go:172] (0xc00067af00) (3) Data frame handling\nI0125 12:57:37.687497    3570 log.go:172] (0xc00067af00) (3) Data frame sent\nI0125 12:57:38.064733    3570 log.go:172] (0xc000736370) Data frame received for 1\nI0125 12:57:38.064806    3570 log.go:172] (0xc000736370) (0xc000708000) Stream removed, broadcasting: 5\nI0125 12:57:38.064849    3570 log.go:172] (0xc0007ac640) (1) Data frame handling\nI0125 12:57:38.064857    3570 log.go:172] (0xc0007ac640) (1) Data frame sent\nI0125 12:57:38.065042    3570 log.go:172] (0xc000736370) (0xc00067af00) Stream removed, broadcasting: 3\nI0125 12:57:38.065092    3570 log.go:172] (0xc000736370) (0xc0007ac640) Stream removed, broadcasting: 1\nI0125 12:57:38.065119    3570 log.go:172] (0xc000736370) Go away received\nI0125 12:57:38.065756    3570 log.go:172] (0xc000736370) (0xc0007ac640) Stream removed, broadcasting: 1\nI0125 12:57:38.065767    3570 log.go:172] (0xc000736370) (0xc00067af00) Stream removed, broadcasting: 3\nI0125 12:57:38.065773    3570 log.go:172] (0xc000736370) (0xc000708000) Stream removed, broadcasting: 5\n"
Jan 25 12:57:38.078: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 12:57:38.078: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 12:57:38.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7xpr5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 25 12:57:38.694: INFO: stderr: "I0125 12:57:38.278305    3593 log.go:172] (0xc000158840) (0xc0000174a0) Create stream\nI0125 12:57:38.278430    3593 log.go:172] (0xc000158840) (0xc0000174a0) Stream added, broadcasting: 1\nI0125 12:57:38.304533    3593 log.go:172] (0xc000158840) Reply frame received for 1\nI0125 12:57:38.304652    3593 log.go:172] (0xc000158840) (0xc000508000) Create stream\nI0125 12:57:38.304694    3593 log.go:172] (0xc000158840) (0xc000508000) Stream added, broadcasting: 3\nI0125 12:57:38.310374    3593 log.go:172] (0xc000158840) Reply frame received for 3\nI0125 12:57:38.310423    3593 log.go:172] (0xc000158840) (0xc0005080a0) Create stream\nI0125 12:57:38.310433    3593 log.go:172] (0xc000158840) (0xc0005080a0) Stream added, broadcasting: 5\nI0125 12:57:38.315981    3593 log.go:172] (0xc000158840) Reply frame received for 5\nI0125 12:57:38.409781    3593 log.go:172] (0xc000158840) Data frame received for 3\nI0125 12:57:38.409882    3593 log.go:172] (0xc000508000) (3) Data frame handling\nI0125 12:57:38.409910    3593 log.go:172] (0xc000508000) (3) Data frame sent\nI0125 12:57:38.681997    3593 log.go:172] (0xc000158840) (0xc0005080a0) Stream removed, broadcasting: 5\nI0125 12:57:38.682453    3593 log.go:172] (0xc000158840) (0xc000508000) Stream removed, broadcasting: 3\nI0125 12:57:38.682646    3593 log.go:172] (0xc000158840) Data frame received for 1\nI0125 12:57:38.682757    3593 log.go:172] (0xc0000174a0) (1) Data frame handling\nI0125 12:57:38.682806    3593 log.go:172] (0xc0000174a0) (1) Data frame sent\nI0125 12:57:38.682963    3593 log.go:172] (0xc000158840) (0xc0000174a0) Stream removed, broadcasting: 1\nI0125 12:57:38.683013    3593 log.go:172] (0xc000158840) Go away received\nI0125 12:57:38.683425    3593 log.go:172] (0xc000158840) (0xc0000174a0) Stream removed, broadcasting: 1\nI0125 12:57:38.683446    3593 log.go:172] (0xc000158840) (0xc000508000) Stream removed, broadcasting: 3\nI0125 12:57:38.683455    3593 log.go:172] (0xc000158840) (0xc0005080a0) Stream removed, broadcasting: 5\n"
Jan 25 12:57:38.695: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 25 12:57:38.695: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 25 12:57:38.695: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 25 12:58:09.966: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7xpr5
Jan 25 12:58:09.981: INFO: Scaling statefulset ss to 0
Jan 25 12:58:10.260: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 12:58:10.285: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:58:10.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7xpr5" for this suite.
Jan 25 12:58:18.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:58:18.898: INFO: namespace: e2e-tests-statefulset-7xpr5, resource: bindings, ignored listing per whitelist
Jan 25 12:58:18.944: INFO: namespace e2e-tests-statefulset-7xpr5 deletion completed in 8.614956711s

• [SLOW TEST:128.464 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
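The commands the test uses to toggle pod health are quoted verbatim in the log above; a condensed sketch of the same sequence (namespace and pod names as logged, the scale command is illustrative):

# Break readiness on ss-0 by moving the file its readiness probe serves, then
# request more replicas: the controller will not scale past the unhealthy pod.
kubectl exec --namespace=e2e-tests-statefulset-7xpr5 ss-0 -- /bin/sh -c \
  'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl scale statefulset ss --replicas=3 --namespace=e2e-tests-statefulset-7xpr5
# Restore the file; ss-0 turns Ready again and ss-1, ss-2 are created in order.
kubectl exec --namespace=e2e-tests-statefulset-7xpr5 ss-0 -- /bin/sh -c \
  'mv -v /tmp/index.html /usr/share/nginx/html/ || true'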
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:58:18.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 25 12:58:19.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 25 12:58:19.515: INFO: stderr: ""
Jan 25 12:58:19.515: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:58:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ckxx8" for this suite.
Jan 25 12:58:25.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:58:25.877: INFO: namespace: e2e-tests-kubectl-ckxx8, resource: bindings, ignored listing per whitelist
Jan 25 12:58:25.886: INFO: namespace e2e-tests-kubectl-ckxx8 deletion completed in 6.352193991s

• [SLOW TEST:6.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
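The check itself reduces to a one-line pipeline; a sketch:

# Exits 0 only if the core "v1" group/version appears in the list printed above.
kubectl api-versions | grep -x v1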
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:58:25.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 12:58:26.168: INFO: Waiting up to 5m0s for pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-gdnx9" to be "success or failure"
Jan 25 12:58:26.190: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 21.703636ms
Jan 25 12:58:28.650: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48217495s
Jan 25 12:58:30.668: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499626934s
Jan 25 12:58:34.048: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.879969092s
Jan 25 12:58:36.089: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.920591647s
Jan 25 12:58:38.172: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 12.004037734s
Jan 25 12:58:40.228: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.059701824s
STEP: Saw pod success
Jan 25 12:58:40.228: INFO: Pod "pod-603b54e4-3f72-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 12:58:40.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-603b54e4-3f72-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 12:58:42.761: INFO: Waiting for pod pod-603b54e4-3f72-11ea-8a8b-0242ac110006 to disappear
Jan 25 12:58:42.785: INFO: Pod pod-603b54e4-3f72-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:58:42.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gdnx9" for this suite.
Jan 25 12:58:49.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:58:49.277: INFO: namespace: e2e-tests-emptydir-gdnx9, resource: bindings, ignored listing per whitelist
Jan 25 12:58:49.306: INFO: namespace e2e-tests-emptydir-gdnx9 deletion completed in 6.493930775s

• [SLOW TEST:23.420 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
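A hedged sketch of what the (root,0777,default) case boils down to, using a hypothetical busybox pod rather than the test's own image: write a 0777 file into an emptyDir on the default medium and exit 0, which is the "success or failure" condition the log waits for.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /mnt/test && chmod 0777 /mnt/test && ls -l /mnt/test"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt
EOF
kubectl logs emptydir-mode-demo     # shows -rwxrwxrwx permissions on /mnt/test
kubectl get pod emptydir-mode-demo  # reaches Completed once the command exits 0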
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:58:49.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 12:58:49.469: INFO: Creating deployment "test-recreate-deployment"
Jan 25 12:58:49.480: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 25 12:58:49.496: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 25 12:58:51.543: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 25 12:58:51.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:58:53.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:58:56.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:58:59.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:59:00.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:59:01.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715553929, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 12:59:03.781: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 25 12:59:03.804: INFO: Updating deployment test-recreate-deployment
Jan 25 12:59:03.804: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 25 12:59:06.533: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-7pnms,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7pnms/deployments/test-recreate-deployment,UID:6e2150a1-3f72-11ea-a994-fa163e34d433,ResourceVersion:19415598,Generation:2,CreationTimestamp:2020-01-25 12:58:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-25 12:59:04 +0000 UTC 2020-01-25 12:59:04 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-25 12:59:05 +0000 UTC 2020-01-25 12:58:49 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 25 12:59:06.557: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-7pnms,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7pnms/replicasets/test-recreate-deployment-589c4bfd,UID:770ffc5d-3f72-11ea-a994-fa163e34d433,ResourceVersion:19415596,Generation:1,CreationTimestamp:2020-01-25 12:59:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6e2150a1-3f72-11ea-a994-fa163e34d433 0xc00269606f 0xc002696080}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 12:59:06.557: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 25 12:59:06.558: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-7pnms,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7pnms/replicasets/test-recreate-deployment-5bf7f65dc,UID:6e24b702-3f72-11ea-a994-fa163e34d433,ResourceVersion:19415587,Generation:2,CreationTimestamp:2020-01-25 12:58:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6e2150a1-3f72-11ea-a994-fa163e34d433 0xc002696140 0xc002696141}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 12:59:06.570: INFO: Pod "test-recreate-deployment-589c4bfd-9l9rw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-9l9rw,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-7pnms,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7pnms/pods/test-recreate-deployment-589c4bfd-9l9rw,UID:7712e0de-3f72-11ea-a994-fa163e34d433,ResourceVersion:19415600,Generation:0,CreationTimestamp:2020-01-25 12:59:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 770ffc5d-3f72-11ea-a994-fa163e34d433 0xc00240419f 0xc0024041b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gkhhc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gkhhc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gkhhc true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002404210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002404230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:59:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:59:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:59:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 12:59:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 12:59:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:59:06.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7pnms" for this suite.
Jan 25 12:59:16.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 12:59:16.125: INFO: namespace: e2e-tests-deployment-7pnms, resource: bindings, ignored listing per whitelist
Jan 25 12:59:16.201: INFO: namespace e2e-tests-deployment-7pnms deletion completed in 9.623348707s

• [SLOW TEST:26.895 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
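
The behaviour verified above (all old pods deleted before any new pod starts) comes from the Recreate strategy. A minimal sketch of such a Deployment is shown below; the name and labels are illustrative, and only the image matches what the log shows the new ReplicaSet rolling to.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo               # illustrative; the test uses "test-recreate-deployment"
spec:
  replicas: 1
  strategy:
    type: Recreate                  # no RollingUpdate: old pods are removed before new ones are created
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # the image the new ReplicaSet rolled to in this run
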
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 12:59:16.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 25 12:59:16.560: INFO: namespace e2e-tests-kubectl-jk5wm
Jan 25 12:59:16.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jk5wm'
Jan 25 12:59:19.459: INFO: stderr: ""
Jan 25 12:59:19.459: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 25 12:59:20.489: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:20.490: INFO: Found 0 / 1
Jan 25 12:59:21.480: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:21.480: INFO: Found 0 / 1
Jan 25 12:59:22.478: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:22.478: INFO: Found 0 / 1
Jan 25 12:59:23.492: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:23.492: INFO: Found 0 / 1
Jan 25 12:59:24.496: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:24.496: INFO: Found 0 / 1
Jan 25 12:59:26.061: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:26.062: INFO: Found 0 / 1
Jan 25 12:59:26.701: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:26.701: INFO: Found 0 / 1
Jan 25 12:59:27.552: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:27.552: INFO: Found 0 / 1
Jan 25 12:59:28.838: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:28.838: INFO: Found 0 / 1
Jan 25 12:59:29.487: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:29.488: INFO: Found 0 / 1
Jan 25 12:59:30.483: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:30.483: INFO: Found 0 / 1
Jan 25 12:59:31.488: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:31.488: INFO: Found 0 / 1
Jan 25 12:59:32.568: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:32.568: INFO: Found 1 / 1
Jan 25 12:59:32.569: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 12:59:32.691: INFO: Selector matched 1 pods for map[app:redis]
Jan 25 12:59:32.692: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan 25 12:59:32.692: INFO: wait on redis-master startup in e2e-tests-kubectl-jk5wm 
Jan 25 12:59:32.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fsv95 redis-master --namespace=e2e-tests-kubectl-jk5wm'
Jan 25 12:59:32.936: INFO: stderr: ""
Jan 25 12:59:32.936: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Jan 12:59:30.372 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Jan 12:59:30.373 # Server started, Redis version 3.2.12\n1:M 25 Jan 12:59:30.374 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Jan 12:59:30.374 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 25 12:59:32.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-jk5wm'
Jan 25 12:59:33.109: INFO: stderr: ""
Jan 25 12:59:33.109: INFO: stdout: "service/rm2 exposed\n"
Jan 25 12:59:33.116: INFO: Service rm2 in namespace e2e-tests-kubectl-jk5wm found.
STEP: exposing service
Jan 25 12:59:35.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-jk5wm'
Jan 25 12:59:35.338: INFO: stderr: ""
Jan 25 12:59:35.338: INFO: stdout: "service/rm3 exposed\n"
Jan 25 12:59:35.363: INFO: Service rm3 in namespace e2e-tests-kubectl-jk5wm found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 12:59:37.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jk5wm" for this suite.
Jan 25 13:00:05.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:00:05.619: INFO: namespace: e2e-tests-kubectl-jk5wm, resource: bindings, ignored listing per whitelist
Jan 25 13:00:05.712: INFO: namespace e2e-tests-kubectl-jk5wm deletion completed in 28.230434643s

• [SLOW TEST:49.510 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
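
The Service generated by 'kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379' above is roughly equivalent to the manifest below. This is a hedged reconstruction rather than output captured from the run; the selector reflects the RC's app=redis pod label seen earlier in the log.

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis                      # copied from the replication controller's pod labels by kubectl expose
  ports:
  - port: 1234                      # --port
    targetPort: 6379                # --target-port
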
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:00:05.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:00:06.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-n7qkz" to be "success or failure"
Jan 25 13:00:06.275: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 206.331127ms
Jan 25 13:00:08.304: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234939256s
Jan 25 13:00:10.540: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471129172s
Jan 25 13:00:12.774: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.705808446s
Jan 25 13:00:15.201: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.132777171s
Jan 25 13:00:17.219: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.150326266s
Jan 25 13:00:19.236: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.16717059s
STEP: Saw pod success
Jan 25 13:00:19.236: INFO: Pod "downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:00:19.248: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:00:19.493: INFO: Waiting for pod downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:00:19.515: INFO: Pod downwardapi-volume-9bc66a06-3f72-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:00:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n7qkz" for this suite.
Jan 25 13:00:25.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:00:25.716: INFO: namespace: e2e-tests-projected-n7qkz, resource: bindings, ignored listing per whitelist
Jan 25 13:00:25.852: INFO: namespace e2e-tests-projected-n7qkz deletion completed in 6.331162685s

• [SLOW TEST:20.140 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
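
The pod in this case mounts a projected downward API volume whose file carries the container's CPU request via resourceFieldRef. A minimal sketch under assumed names, image, and request value (the test generates its own pod and volume names):

apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-request-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # the value the downward API file should expose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
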
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:00:25.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:00:26.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-blj2v" to be "success or failure"
Jan 25 13:00:26.083: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.804896ms
Jan 25 13:00:28.103: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027525113s
Jan 25 13:00:30.121: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045832488s
Jan 25 13:00:32.161: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085180323s
Jan 25 13:00:35.048: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.972303836s
Jan 25 13:00:37.070: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.99424973s
Jan 25 13:00:39.361: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.286080881s
Jan 25 13:00:42.526: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.450249809s
STEP: Saw pod success
Jan 25 13:00:42.526: INFO: Pod "downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:00:42.560: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:00:43.138: INFO: Waiting for pod downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:00:43.243: INFO: Pod downwardapi-volume-a7b318a9-3f72-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:00:43.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-blj2v" for this suite.
Jan 25 13:00:51.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:00:51.624: INFO: namespace: e2e-tests-downward-api-blj2v, resource: bindings, ignored listing per whitelist
Jan 25 13:00:51.634: INFO: namespace e2e-tests-downward-api-blj2v deletion completed in 8.372081001s

• [SLOW TEST:25.780 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
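
This case is the non-projected variant: a plain downwardAPI volume exposing limits.cpu. A hedged sketch with illustrative names and an assumed limit value:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                    # published into the volume file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
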
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:00:51.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:00:51.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-xmcr4" to be "success or failure"
Jan 25 13:00:51.972: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 141.331209ms
Jan 25 13:00:54.285: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.455224523s
Jan 25 13:00:56.297: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467272046s
Jan 25 13:00:58.335: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.505065807s
Jan 25 13:01:00.891: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060666661s
Jan 25 13:01:02.912: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.082137937s
Jan 25 13:01:04.975: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.144454619s
STEP: Saw pod success
Jan 25 13:01:04.975: INFO: Pod "downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:01:05.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:01:05.217: INFO: Waiting for pod downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:01:05.270: INFO: Pod downwardapi-volume-b70d2837-3f72-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:01:05.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xmcr4" for this suite.
Jan 25 13:01:11.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:01:11.359: INFO: namespace: e2e-tests-projected-xmcr4, resource: bindings, ignored listing per whitelist
Jan 25 13:01:11.426: INFO: namespace e2e-tests-projected-xmcr4 deletion completed in 6.140603923s

• [SLOW TEST:19.792 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
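
Here the projected downward API item uses fieldRef rather than resourceFieldRef, so only the pod's own name is written into the file. A hedged sketch with assumed names and image:

apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # only the pod's own name is published
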
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:01:11.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-rqpq
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 13:01:11.698: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rqpq" in namespace "e2e-tests-subpath-nqklj" to be "success or failure"
Jan 25 13:01:11.837: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 138.391222ms
Jan 25 13:01:14.363: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.663979671s
Jan 25 13:01:16.388: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689254424s
Jan 25 13:01:18.929: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 7.230323207s
Jan 25 13:01:20.944: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 9.245901231s
Jan 25 13:01:22.958: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.259881748s
Jan 25 13:01:24.976: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.277037266s
Jan 25 13:01:26.999: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.299992236s
Jan 25 13:01:29.047: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 17.348472466s
Jan 25 13:01:31.066: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 19.367861493s
Jan 25 13:01:33.140: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 21.441387123s
Jan 25 13:01:36.047: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.348281226s
Jan 25 13:01:38.066: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 26.367344538s
Jan 25 13:01:40.083: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 28.384220368s
Jan 25 13:01:42.096: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 30.397420687s
Jan 25 13:01:44.107: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 32.40812172s
Jan 25 13:01:46.139: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 34.440886315s
Jan 25 13:01:48.168: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 36.469131193s
Jan 25 13:01:50.195: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Running", Reason="", readiness=false. Elapsed: 38.496259049s
Jan 25 13:01:53.631: INFO: Pod "pod-subpath-test-secret-rqpq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.932693715s
STEP: Saw pod success
Jan 25 13:01:53.632: INFO: Pod "pod-subpath-test-secret-rqpq" satisfied condition "success or failure"
Jan 25 13:01:54.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-rqpq container test-container-subpath-secret-rqpq: 
STEP: delete the pod
Jan 25 13:01:55.195: INFO: Waiting for pod pod-subpath-test-secret-rqpq to disappear
Jan 25 13:01:55.206: INFO: Pod pod-subpath-test-secret-rqpq no longer exists
STEP: Deleting pod pod-subpath-test-secret-rqpq
Jan 25 13:01:55.206: INFO: Deleting pod "pod-subpath-test-secret-rqpq" in namespace "e2e-tests-subpath-nqklj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:01:55.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nqklj" for this suite.
Jan 25 13:02:03.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:02:03.479: INFO: namespace: e2e-tests-subpath-nqklj, resource: bindings, ignored listing per whitelist
Jan 25 13:02:03.623: INFO: namespace e2e-tests-subpath-nqklj deletion completed in 8.371769148s

• [SLOW TEST:52.197 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
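
The pod above mounts a single key of a Secret via subPath rather than the whole secret directory. A minimal sketch, with the secret name, key, and contents assumed rather than taken from the test:

apiVersion: v1
kind: Secret
metadata:
  name: subpath-demo-secret         # illustrative
stringData:
  test-file: "contents checked by the container"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /mnt/test-file"]
    volumeMounts:
    - name: secret-volume
      mountPath: /mnt/test-file
      subPath: test-file             # mounts just this key, not the whole secret directory
  volumes:
  - name: secret-volume
    secret:
      secretName: subpath-demo-secret
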
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:02:03.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e21e1eac-3f72-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 13:02:04.146: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-5h5dv" to be "success or failure"
Jan 25 13:02:04.325: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 178.962791ms
Jan 25 13:02:06.349: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20302334s
Jan 25 13:02:08.377: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230719902s
Jan 25 13:02:10.411: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264272453s
Jan 25 13:02:12.884: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737416661s
Jan 25 13:02:14.934: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.788032384s
Jan 25 13:02:17.102: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.956049454s
STEP: Saw pod success
Jan 25 13:02:17.103: INFO: Pod "pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:02:17.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 13:02:17.163: INFO: Waiting for pod pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:02:17.168: INFO: Pod pod-projected-secrets-e222ca7a-3f72-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:02:17.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5h5dv" for this suite.
Jan 25 13:02:23.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:02:23.346: INFO: namespace: e2e-tests-projected-5h5dv, resource: bindings, ignored listing per whitelist
Jan 25 13:02:23.448: INFO: namespace e2e-tests-projected-5h5dv deletion completed in 6.275922579s

• [SLOW TEST:19.824 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
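
This case projects one secret key to a different path ("mapping") and sets a per-item file mode. A hedged sketch with assumed secret name, key, and mode:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo   # assumed secret with a key "data-1"
          items:
          - key: data-1
            path: new-path-data-1       # the mapping to a new file name
            mode: 0400                  # the per-item mode this case verifies
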
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:02:23.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ede00dd4-3f72-11ea-8a8b-0242ac110006
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ede00dd4-3f72-11ea-8a8b-0242ac110006
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:02:38.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kw7pz" for this suite.
Jan 25 13:03:02.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:03:02.772: INFO: namespace: e2e-tests-projected-kw7pz, resource: bindings, ignored listing per whitelist
Jan 25 13:03:02.840: INFO: namespace e2e-tests-projected-kw7pz deletion completed in 24.567434249s

• [SLOW TEST:39.392 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
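
What this case checks is that editing the ConfigMap's data is eventually visible inside an already-running pod that mounts it through a projected volume. A hedged sketch with assumed names and data; changing data-1 with kubectl apply after the pod is running should eventually change what the container reads.

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo           # illustrative
data:
  data-1: "value-1"                 # edit this value later; the mounted file is updated in place
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-demo   # illustrative
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
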
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:03:02.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 25 13:03:03.082: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 25 13:03:03.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:03.761: INFO: stderr: ""
Jan 25 13:03:03.761: INFO: stdout: "service/redis-slave created\n"
Jan 25 13:03:03.762: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 25 13:03:03.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:04.392: INFO: stderr: ""
Jan 25 13:03:04.392: INFO: stdout: "service/redis-master created\n"
Jan 25 13:03:04.393: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 25 13:03:04.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:04.878: INFO: stderr: ""
Jan 25 13:03:04.879: INFO: stdout: "service/frontend created\n"
Jan 25 13:03:04.880: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 25 13:03:04.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:05.329: INFO: stderr: ""
Jan 25 13:03:05.330: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 25 13:03:05.331: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 25 13:03:05.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:06.064: INFO: stderr: ""
Jan 25 13:03:06.064: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 25 13:03:06.066: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 25 13:03:06.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:07.849: INFO: stderr: ""
Jan 25 13:03:07.850: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 25 13:03:07.850: INFO: Waiting for all frontend pods to be Running.
Jan 25 13:03:42.907: INFO: Waiting for frontend to serve content.
Jan 25 13:03:43.207: INFO: Trying to add a new entry to the guestbook.
Jan 25 13:03:43.335: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 25 13:03:43.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:43.901: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 13:03:43.901: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 13:03:43.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:44.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 13:03:44.253: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 13:03:44.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:44.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 13:03:44.615: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 13:03:44.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:44.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 13:03:44.817: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 13:03:44.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:45.249: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 13:03:45.249: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 13:03:45.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h9vp'
Jan 25 13:03:45.436: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 13:03:45.436: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:03:45.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8h9vp" for this suite.
Jan 25 13:04:31.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:04:31.723: INFO: namespace: e2e-tests-kubectl-8h9vp, resource: bindings, ignored listing per whitelist
Jan 25 13:04:31.920: INFO: namespace e2e-tests-kubectl-8h9vp deletion completed in 46.457895676s

• [SLOW TEST:89.079 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
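For reference, the force-delete cleanup logged above corresponds to the following standalone kubectl invocation; the manifest file and namespace names here are placeholders rather than the suite's actual fixtures, so treat this as a sketch of the pattern only.

# Pipe a previously applied manifest back into "kubectl delete" with immediate deletion.
# --grace-period=0 --force skips graceful termination, which is why kubectl prints the
# "Immediate deletion does not wait for confirmation..." warning seen in the log.
cat guestbook-all-in-one.yaml | kubectl --kubeconfig=/root/.kube/config delete \
  --grace-period=0 --force -f - --namespace=my-test-namespace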
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:04:31.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with the label name=pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:04:43.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-p4965" for this suite.
Jan 25 13:05:08.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:05:08.199: INFO: namespace: e2e-tests-replication-controller-p4965, resource: bindings, ignored listing per whitelist
Jan 25 13:05:08.223: INFO: namespace e2e-tests-replication-controller-p4965 deletion completed in 24.731540878s

• [SLOW TEST:36.303 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
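The adoption flow above can be reproduced by hand with plain kubectl; the names below are illustrative and the image is simply the nginx:1.14-alpine tag that appears elsewhere in this run, not the spec's own fixture.

# 1. A bare pod carrying the label the controller will later select on.
kubectl run pod-adoption --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --labels="name=pod-adoption"
# 2. A replication controller whose selector matches that label and whose
#    replica count is already satisfied by the existing pod.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
# 3. The orphan pod is adopted instead of a new one being created; its
#    ownerReferences now point at the controller.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'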
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:05:08.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:05:08.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-cj84g" to be "success or failure"
Jan 25 13:05:08.673: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 52.56207ms
Jan 25 13:05:10.943: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323371935s
Jan 25 13:05:12.966: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346096378s
Jan 25 13:05:14.975: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354607819s
Jan 25 13:05:17.014: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394229994s
Jan 25 13:05:20.511: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.89063814s
Jan 25 13:05:22.584: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.96389512s
Jan 25 13:05:24.615: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 15.994984099s
Jan 25 13:05:26.663: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.043150993s
STEP: Saw pod success
Jan 25 13:05:26.664: INFO: Pod "downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:05:26.681: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:05:27.097: INFO: Waiting for pod downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:05:28.623: INFO: Pod downwardapi-volume-500c83fc-3f73-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:05:28.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cj84g" for this suite.
Jan 25 13:05:37.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:05:37.161: INFO: namespace: e2e-tests-projected-cj84g, resource: bindings, ignored listing per whitelist
Jan 25 13:05:37.317: INFO: namespace e2e-tests-projected-cj84g deletion completed in 8.661242318s

• [SLOW TEST:29.094 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
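A minimal pod that exercises the same per-item mode setting on a projected downward API volume might look like the sketch below; the pod name, image, and mode value are assumptions for illustration.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the effective mode of the projected item (follow the volume's symlinks with -L).
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400            # per-item file mode, the property this spec asserts on
            fieldRef:
              fieldPath: metadata.name
EOF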
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:05:37.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:05:37.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-5fm28" to be "success or failure"
Jan 25 13:05:37.829: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 24.382943ms
Jan 25 13:05:40.279: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474985147s
Jan 25 13:05:42.297: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492533196s
Jan 25 13:05:44.471: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666735281s
Jan 25 13:05:46.490: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.685520669s
Jan 25 13:05:48.829: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.02499901s
STEP: Saw pod success
Jan 25 13:05:48.830: INFO: Pod "downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:05:48.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:05:49.280: INFO: Waiting for pod downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:05:49.475: INFO: Pod downwardapi-volume-6169a8b1-3f73-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:05:49.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5fm28" for this suite.
Jan 25 13:05:55.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:05:55.617: INFO: namespace: e2e-tests-downward-api-5fm28, resource: bindings, ignored listing per whitelist
Jan 25 13:05:55.718: INFO: namespace e2e-tests-downward-api-5fm28 deletion completed in 6.229969884s

• [SLOW TEST:18.400 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
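The behaviour under test here, falling back to the node's allocatable memory when no limit is set, can be sketched with a downward API volume item that references limits.memory on a container that declares no limits; all names below are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-default-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # no limit declared, so the node's allocatable memory is exposed
EOF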
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:05:55.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 13:05:55.960: INFO: Waiting up to 5m0s for pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-dpbpj" to be "success or failure"
Jan 25 13:05:56.047: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 86.760024ms
Jan 25 13:05:58.299: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339103985s
Jan 25 13:06:00.358: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398162758s
Jan 25 13:06:02.411: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450888055s
Jan 25 13:06:04.424: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.464365445s
Jan 25 13:06:07.845: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.88535959s
Jan 25 13:06:10.004: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043983513s
Jan 25 13:06:12.159: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.198607879s
Jan 25 13:06:14.203: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.242451747s
STEP: Saw pod success
Jan 25 13:06:14.203: INFO: Pod "pod-6c528e5b-3f73-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:06:14.234: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6c528e5b-3f73-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 13:06:14.674: INFO: Waiting for pod pod-6c528e5b-3f73-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:06:14.684: INFO: Pod pod-6c528e5b-3f73-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:06:14.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dpbpj" for this suite.
Jan 25 13:06:22.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:06:22.959: INFO: namespace: e2e-tests-emptydir-dpbpj, resource: bindings, ignored listing per whitelist
Jan 25 13:06:23.063: INFO: namespace e2e-tests-emptydir-dpbpj deletion completed in 8.369949117s

• [SLOW TEST:27.345 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
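The (root,0666,tmpfs) case boils down to a memory-backed emptyDir and a file created with mode 0666 inside it; a hand-rolled equivalent (names and image are placeholders) could be:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a file with mode 0666, report its mode, and confirm the mount is tmpfs.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # memory-backed (tmpfs) emptyDir
EOF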
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:06:23.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 25 13:06:23.728: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2hh97,SelfLink:/api/v1/namespaces/e2e-tests-watch-2hh97/configmaps/e2e-watch-test-resource-version,UID:7cb677e3-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416610,Generation:0,CreationTimestamp:2020-01-25 13:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 13:06:23.729: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2hh97,SelfLink:/api/v1/namespaces/e2e-tests-watch-2hh97/configmaps/e2e-watch-test-resource-version,UID:7cb677e3-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416611,Generation:0,CreationTimestamp:2020-01-25 13:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:06:23.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2hh97" for this suite.
Jan 25 13:06:30.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:06:30.292: INFO: namespace: e2e-tests-watch-2hh97, resource: bindings, ignored listing per whitelist
Jan 25 13:06:30.292: INFO: namespace e2e-tests-watch-2hh97 deletion completed in 6.364222418s

• [SLOW TEST:7.229 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
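Outside the Go client, the same "watch from a specific resourceVersion" flow can be approximated against the raw API; the configmap name, namespace, and mutation values below are placeholders, and the final command streams until interrupted.

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
# Remember the resourceVersion after the first update...
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo
# ...then a watch started from it replays only the later MODIFIED and DELETED
# events, which is what the two "Got :" lines above show. (Ctrl-C to stop.)
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"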
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:06:30.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 25 13:06:30.687: INFO: Waiting up to 5m0s for pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006" in namespace "e2e-tests-containers-zhmlq" to be "success or failure"
Jan 25 13:06:30.827: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 140.000537ms
Jan 25 13:06:32.843: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155997216s
Jan 25 13:06:34.869: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181360686s
Jan 25 13:06:36.895: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207290733s
Jan 25 13:06:39.057: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369269275s
Jan 25 13:06:41.076: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388607614s
Jan 25 13:06:43.493: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.805137383s
STEP: Saw pod success
Jan 25 13:06:43.493: INFO: Pod "client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:06:43.505: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 13:06:43.576: INFO: Waiting for pod client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:06:43.638: INFO: Pod client-containers-81022c7f-3f73-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:06:43.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zhmlq" for this suite.
Jan 25 13:06:49.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:06:49.767: INFO: namespace: e2e-tests-containers-zhmlq, resource: bindings, ignored listing per whitelist
Jan 25 13:06:49.850: INFO: namespace e2e-tests-containers-zhmlq deletion completed in 6.197943286s

• [SLOW TEST:19.557 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
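When a pod spec leaves command and args empty, the container runs the image's own ENTRYPOINT/CMD; a minimal illustration (the pod name is made up, the image tag is the one used elsewhere in this run) is:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # no command/args: image defaults apply
EOF
# The stored spec carries no command, so the runtime falls back to the image defaults.
kubectl get pod client-containers-demo -o jsonpath='{.spec.containers[0].command}{"\n"}'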
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:06:49.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:06:50.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-9882z" to be "success or failure"
Jan 25 13:06:50.067: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 19.041421ms
Jan 25 13:06:52.092: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044403194s
Jan 25 13:06:54.157: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109913045s
Jan 25 13:06:56.169: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121319673s
Jan 25 13:06:58.181: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133722212s
Jan 25 13:07:00.204: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.156205261s
Jan 25 13:07:02.217: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.169218012s
STEP: Saw pod success
Jan 25 13:07:02.217: INFO: Pod "downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:07:02.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:07:03.481: INFO: Waiting for pod downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:07:03.716: INFO: Pod downwardapi-volume-8c920d06-3f73-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:07:03.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9882z" for this suite.
Jan 25 13:07:09.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:07:10.027: INFO: namespace: e2e-tests-downward-api-9882z, resource: bindings, ignored listing per whitelist
Jan 25 13:07:10.082: INFO: namespace e2e-tests-downward-api-9882z deletion completed in 6.345402559s

• [SLOW TEST:20.231 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
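The non-projected variant of the mode-on-item check is the same idea with a plain downwardAPI volume; again, the pod name, image, and mode value are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                  # per-item mode checked on the mounted file
        fieldRef:
          fieldPath: metadata.name
EOF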
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:07:10.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-98ac146f-3f73-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 13:07:10.641: INFO: Waiting up to 5m0s for pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-5s545" to be "success or failure"
Jan 25 13:07:10.676: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 34.856348ms
Jan 25 13:07:12.690: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048883083s
Jan 25 13:07:14.711: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070357935s
Jan 25 13:07:16.961: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319913501s
Jan 25 13:07:19.438: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797190321s
Jan 25 13:07:21.454: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.812881919s
STEP: Saw pod success
Jan 25 13:07:21.454: INFO: Pod "pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:07:21.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan 25 13:07:21.988: INFO: Waiting for pod pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:07:22.028: INFO: Pod pod-secrets-98d21fa3-3f73-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:07:22.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5s545" for this suite.
Jan 25 13:07:28.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:07:28.261: INFO: namespace: e2e-tests-secrets-5s545, resource: bindings, ignored listing per whitelist
Jan 25 13:07:28.360: INFO: namespace e2e-tests-secrets-5s545 deletion completed in 6.31262776s
STEP: Destroying namespace "e2e-tests-secret-namespace-nhc4j" for this suite.
Jan 25 13:07:34.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:07:34.471: INFO: namespace: e2e-tests-secret-namespace-nhc4j, resource: bindings, ignored listing per whitelist
Jan 25 13:07:34.921: INFO: namespace e2e-tests-secret-namespace-nhc4j deletion completed in 6.560440986s

• [SLOW TEST:24.839 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
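The point of this spec is that a secret mount only ever resolves within the pod's own namespace, even if another namespace holds a secret of the same name; a hand-built sketch (namespaces, secret name, and values are placeholders) is:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data-1=value-a
kubectl -n demo-b create secret generic shared-name --from-literal=data-1=value-b
kubectl -n demo-a create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Prints "value-a": the mount resolves the secret from the pod's namespace (demo-a),
    # never the identically named secret in demo-b.
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF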
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:07:34.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 13:07:35.421: INFO: Creating deployment "nginx-deployment"
Jan 25 13:07:35.480: INFO: Waiting for observed generation 1
Jan 25 13:07:39.225: INFO: Waiting for all required pods to come up
Jan 25 13:07:39.736: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 25 13:08:30.116: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 25 13:08:30.137: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 25 13:08:30.184: INFO: Updating deployment nginx-deployment
Jan 25 13:08:30.184: INFO: Waiting for observed generation 2
Jan 25 13:08:34.147: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 25 13:08:34.163: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 25 13:08:34.169: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 25 13:08:34.958: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 25 13:08:34.958: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 25 13:08:34.962: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 25 13:08:34.979: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 25 13:08:34.979: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 25 13:08:35.215: INFO: Updating deployment nginx-deployment
Jan 25 13:08:35.216: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 25 13:08:36.392: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 25 13:08:38.529: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 25 13:08:41.903: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pmqhd/deployments/nginx-deployment,UID:a79f7fb6-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417080,Generation:3,CreationTimestamp:2020-01-25 13:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-25 13:08:36 +0000 UTC 2020-01-25 13:08:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-25 13:08:38 +0000 UTC 2020-01-25 13:07:35 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 25 13:08:43.382: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pmqhd/replicasets/nginx-deployment-5c98f8fb5,UID:c8412042-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417073,Generation:3,CreationTimestamp:2020-01-25 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a79f7fb6-3f73-11ea-a994-fa163e34d433 0xc0025f8f57 0xc0025f8f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 25 13:08:43.382: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 25 13:08:43.383: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pmqhd/replicasets/nginx-deployment-85ddf47c5d,UID:a7e03950-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417070,Generation:3,CreationTimestamp:2020-01-25 13:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a79f7fb6-3f73-11ea-a994-fa163e34d433 0xc0025f9077 0xc0025f9078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 25 13:08:43.904: INFO: Pod "nginx-deployment-5c98f8fb5-2m467" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2m467,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-2m467,UID:cc38852e-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417034,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc0025f9a27 0xc0025f9a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025f9a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025f9ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.905: INFO: Pod "nginx-deployment-5c98f8fb5-g4mb5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g4mb5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-g4mb5,UID:cc640e08-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417065,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc0025f9b27 0xc0025f9b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025f9b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025f9bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.905: INFO: Pod "nginx-deployment-5c98f8fb5-g5gbd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g5gbd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-g5gbd,UID:cc63d7be-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417056,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc0025f9c27 0xc0025f9c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025f9c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025f9cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.906: INFO: Pod "nginx-deployment-5c98f8fb5-h5klj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h5klj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-h5klj,UID:cc87311d-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417068,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc0025f9d27 0xc0025f9d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025f9d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025f9db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.906: INFO: Pod "nginx-deployment-5c98f8fb5-kzfmf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kzfmf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-kzfmf,UID:cc629ded-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417055,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc0025f9e27 0xc0025f9e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025f9e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025f9eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.907: INFO: Pod "nginx-deployment-5c98f8fb5-lqhhx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lqhhx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-lqhhx,UID:cc1e54a1-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417090,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc0025f9f27 0xc0025f9f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025f9f90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025f9fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.908: INFO: Pod "nginx-deployment-5c98f8fb5-mbhlc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mbhlc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-mbhlc,UID:c874ec89-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416997,Generation:0,CreationTimestamp:2020-01-25 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa4077 0xc002aa4078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.908: INFO: Pod "nginx-deployment-5c98f8fb5-p7rvp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p7rvp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-p7rvp,UID:cc3952d6-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417036,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa4257 0xc002aa4258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa42d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa42f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.908: INFO: Pod "nginx-deployment-5c98f8fb5-ppq2l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ppq2l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-ppq2l,UID:cc63e97a-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417060,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa4367 0xc002aa4368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa43d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa43f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.909: INFO: Pod "nginx-deployment-5c98f8fb5-ptlg8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ptlg8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-ptlg8,UID:c8b8933b-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417005,Generation:0,CreationTimestamp:2020-01-25 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa4467 0xc002aa4468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa44d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa44f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.910: INFO: Pod "nginx-deployment-5c98f8fb5-qzkxr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qzkxr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-qzkxr,UID:c8be5c0b-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417003,Generation:0,CreationTimestamp:2020-01-25 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa45b7 0xc002aa45b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.910: INFO: Pod "nginx-deployment-5c98f8fb5-sjlxs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sjlxs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-sjlxs,UID:c8829273-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416999,Generation:0,CreationTimestamp:2020-01-25 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa4707 0xc002aa4708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.911: INFO: Pod "nginx-deployment-5c98f8fb5-wdslp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wdslp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-5c98f8fb5-wdslp,UID:c87fe11c-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417002,Generation:0,CreationTimestamp:2020-01-25 13:08:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c8412042-3f73-11ea-a994-fa163e34d433 0xc002aa4857 0xc002aa4858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa48c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa48e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
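The pods dumped above all belong to ReplicaSet nginx-deployment-5c98f8fb5, whose container image is the tag nginx:404 (a tag that is not expected to exist), so they are still Pending with the nginx container in ContainerCreating and are logged as "not available". A minimal client-go sketch of how such a rollout can be driven is shown below; it is an illustration under assumptions (modern client-go signatures, kubeconfig taken from the KUBECONFIG environment variable), not the e2e test's own code. The namespace and deployment name are the ones visible in the dumps.

```go
// Sketch only: point an existing Deployment's pod template at an image tag
// that does not exist, so the new ReplicaSet's pods never become available,
// matching the nginx-deployment-5c98f8fb5-* pods in the log above.
// Assumptions: modern client-go Get/Update signatures, KUBECONFIG env var set.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the kubeconfig path in $KUBECONFIG (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace and Deployment name as they appear in the log.
	ns, name := "e2e-tests-deployment-pmqhd", "nginx-deployment"
	ctx := context.TODO()

	// Fetch the Deployment and switch its only container to a nonexistent tag.
	deploy, err := clientset.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	deploy.Spec.Template.Spec.Containers[0].Image = "nginx:404"

	// Updating the pod template creates a new ReplicaSet; its pods stay Pending
	// (in the log they are still in ContainerCreating) and never turn Ready.
	if _, err := clientset.AppsV1().Deployments(ns).Update(ctx, deploy, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rollout to nginx:404 triggered")
}
```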
Jan 25 13:08:43.911: INFO: Pod "nginx-deployment-85ddf47c5d-444t8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-444t8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-444t8,UID:cc1fc29e-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417096,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa49a7 0xc002aa49a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.912: INFO: Pod "nginx-deployment-85ddf47c5d-4bw7g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4bw7g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-4bw7g,UID:cc38f5f8-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417037,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa4ae7 0xc002aa4ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.912: INFO: Pod "nginx-deployment-85ddf47c5d-4gpxs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4gpxs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-4gpxs,UID:a850fc03-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416921,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa4be7 0xc002aa4be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-25 13:07:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6ebc7e17b271d32b39ea9dd9fef8a7ddd9cdd6b9e2752d5454d3a49683ce8da4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.913: INFO: Pod "nginx-deployment-85ddf47c5d-6c6zv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6c6zv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-6c6zv,UID:a80d903a-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416934,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa4d37 0xc002aa4d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-25 13:07:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://438ef1c2d7b3c9a974a78e84dc2447e9d211eb4f3205526df58448aa28bdd260}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.913: INFO: Pod "nginx-deployment-85ddf47c5d-6ccqc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6ccqc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-6ccqc,UID:a80619e8-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416931,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa4e87 0xc002aa4e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa4ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa4f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-25 13:07:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0c7323428a50fd1cbe53c95c10cff1b323cb640e950cc753ab2b38d986721b24}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.914: INFO: Pod "nginx-deployment-85ddf47c5d-8nhvt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8nhvt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-8nhvt,UID:cc382710-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417033,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa4fd7 0xc002aa4fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.915: INFO: Pod "nginx-deployment-85ddf47c5d-cjskd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cjskd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-cjskd,UID:cc38c3bf-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417041,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa50d7 0xc002aa50d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5140} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.915: INFO: Pod "nginx-deployment-85ddf47c5d-dg2gp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dg2gp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-dg2gp,UID:a805ebff-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416912,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa51d7 0xc002aa51d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-25 13:07:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://736ab16e8c1abd86895e5f5b156bbbb94418548cba3aa97a6a4dc8c407570e30}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.915: INFO: Pod "nginx-deployment-85ddf47c5d-f4l7c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f4l7c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-f4l7c,UID:cc682f21-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417067,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5327 0xc002aa5328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa53b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.916: INFO: Pod "nginx-deployment-85ddf47c5d-hqhkl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hqhkl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-hqhkl,UID:cc677ff3-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417062,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5427 0xc002aa5428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa54b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.916: INFO: Pod "nginx-deployment-85ddf47c5d-hqkng" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hqkng,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-hqkng,UID:cbfc6073-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417084,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5527 0xc002aa5528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa55b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-25 13:08:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.917: INFO: Pod "nginx-deployment-85ddf47c5d-j6bzk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j6bzk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-j6bzk,UID:cc38b85d-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417035,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5667 0xc002aa5668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa56e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.917: INFO: Pod "nginx-deployment-85ddf47c5d-lpg5q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lpg5q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-lpg5q,UID:cc67c55f-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417064,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5777 0xc002aa5778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa57f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.918: INFO: Pod "nginx-deployment-85ddf47c5d-n5hcr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n5hcr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-n5hcr,UID:cc671624-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417063,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5897 0xc002aa5898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.918: INFO: Pod "nginx-deployment-85ddf47c5d-qct58" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qct58,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-qct58,UID:a850cde5-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416946,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5997 0xc002aa5998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-25 13:07:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a9a27b6c14a41fe15bd6933c664fe2467fe1f1054962946f7436e071f028e1c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.919: INFO: Pod "nginx-deployment-85ddf47c5d-qz4r6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qz4r6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-qz4r6,UID:a7e8acdf-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416882,Generation:0,CreationTimestamp:2020-01-25 13:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5af7 0xc002aa5af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5b60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-25 13:07:36 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2d5e4157ac7a1b14b5c2cb38937eaa302b0a4c253d454542a4f961ba4ddf6049}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.919: INFO: Pod "nginx-deployment-85ddf47c5d-rhwjs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rhwjs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-rhwjs,UID:cc228709-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417030,Generation:0,CreationTimestamp:2020-01-25 13:08:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5c47 0xc002aa5c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:36 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.920: INFO: Pod "nginx-deployment-85ddf47c5d-src94" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-src94,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-src94,UID:a80d5fd7-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416927,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5d47 0xc002aa5d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5db0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-25 13:07:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://490d1984aecbbfb6edfe4b451663970833837b39c43cd145f8dd8420ea868bd0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.922: INFO: Pod "nginx-deployment-85ddf47c5d-tf846" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tf846,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-tf846,UID:cc67f830-3f73-11ea-a994-fa163e34d433,ResourceVersion:19417066,Generation:0,CreationTimestamp:2020-01-25 13:08:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5e97 0xc002aa5e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa5f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa5f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 25 13:08:43.922: INFO: Pod "nginx-deployment-85ddf47c5d-wvx5c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wvx5c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-pmqhd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pmqhd/pods/nginx-deployment-85ddf47c5d-wvx5c,UID:a80da3eb-3f73-11ea-a994-fa163e34d433,ResourceVersion:19416924,Generation:0,CreationTimestamp:2020-01-25 13:07:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d a7e03950-3f73-11ea-a994-fa163e34d433 0xc002aa5f97 0xc002aa5f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ctjmq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ctjmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ctjmq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f8a020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f8a080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:08:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 13:07:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-25 13:07:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-25 13:08:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://de9de3a319f9d2ef877828ea26a772218a402a56c40d74505fb946bb1fc1f390}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:08:43.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-pmqhd" for this suite.
Jan 25 13:10:46.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:10:46.303: INFO: namespace: e2e-tests-deployment-pmqhd, resource: bindings, ignored listing per whitelist
Jan 25 13:10:46.382: INFO: namespace e2e-tests-deployment-pmqhd deletion completed in 1m59.522283088s

• [SLOW TEST:191.461 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
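For context, the proportional-scaling test above scales an nginx Deployment while a RollingUpdate rollout is still in progress and expects the additional replicas to be split across the old and new ReplicaSets. A minimal Go sketch of that kind of Deployment object (replica counts and surge/unavailable values here are illustrative, not the test's actual parameters):

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10)
	maxSurge := intstr.FromInt(3)       // extra pods allowed above the desired count during a rollout
	maxUnavailable := intstr.FromInt(2) // pods that may be unavailable during a rollout
	labels := map[string]string{"name": "nginx"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	// Scaling a Deployment like this while a rollout is unfinished is what the test
	// checks: the controller distributes the new replicas proportionally between the
	// old and new ReplicaSets rather than assigning them all to one of them.
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
```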
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:10:46.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1afb8c4c-3f74-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume configMaps
Jan 25 13:10:49.883: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-projected-kcqlx" to be "success or failure"
Jan 25 13:10:49.908: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 24.90979ms
Jan 25 13:10:51.985: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101969279s
Jan 25 13:10:54.784: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.900367327s
Jan 25 13:10:56.812: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.928354385s
Jan 25 13:10:58.876: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.993251627s
Jan 25 13:11:01.387: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.503287879s
Jan 25 13:11:03.492: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 13.609200107s
Jan 25 13:11:06.526: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 16.642385214s
Jan 25 13:11:08.543: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.660074217s
Jan 25 13:11:10.870: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 20.986667803s
Jan 25 13:11:12.976: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 23.092716436s
Jan 25 13:11:14.986: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 25.102470615s
Jan 25 13:11:17.012: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 27.128932401s
Jan 25 13:11:20.972: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 31.08877981s
Jan 25 13:11:23.271: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 33.387514967s
Jan 25 13:11:25.357: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 35.47358415s
Jan 25 13:11:27.394: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 37.511018792s
Jan 25 13:11:29.424: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 39.540481688s
Jan 25 13:11:33.105: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.222169771s
STEP: Saw pod success
Jan 25 13:11:33.105: INFO: Pod "pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:11:33.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 13:11:33.722: INFO: Waiting for pod pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:11:33.747: INFO: Pod pod-projected-configmaps-1b20b11e-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:11:33.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kcqlx" for this suite.
Jan 25 13:11:42.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:11:42.207: INFO: namespace: e2e-tests-projected-kcqlx, resource: bindings, ignored listing per whitelist
Jan 25 13:11:42.231: INFO: namespace e2e-tests-projected-kcqlx deletion completed in 8.426195347s

• [SLOW TEST:55.846 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
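The test above creates a ConfigMap and a pod that consumes it through a projected volume with a key-to-path mapping, then reads the mapped file back. A minimal sketch of such a pair of objects, assuming illustrative keys, paths, and a busybox image (the real test uses its own mount-test image):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test-volume-map"},
		Data:       map[string]string{"data-1": "value-1"}, // illustrative key/value
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
								// The mapping: key "data-1" appears in the volume as "path/to/data-1".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
							},
						}},
					},
				},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```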
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:11:42.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 25 13:11:42.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-zfr8z" to be "success or failure"
Jan 25 13:11:42.686: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 41.581949ms
Jan 25 13:11:45.558: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.913592985s
Jan 25 13:11:47.565: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.920522694s
Jan 25 13:11:49.578: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.934141299s
Jan 25 13:11:51.593: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.94913429s
Jan 25 13:11:53.738: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.094191034s
Jan 25 13:11:55.809: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.164784499s
STEP: Saw pod success
Jan 25 13:11:55.809: INFO: Pod "downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:11:55.826: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006 container client-container: 
STEP: delete the pod
Jan 25 13:11:56.306: INFO: Waiting for pod downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:11:56.327: INFO: Pod downwardapi-volume-3ae8cb31-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:11:56.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zfr8z" for this suite.
Jan 25 13:12:02.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:12:02.695: INFO: namespace: e2e-tests-downward-api-zfr8z, resource: bindings, ignored listing per whitelist
Jan 25 13:12:02.754: INFO: namespace e2e-tests-downward-api-zfr8z deletion completed in 6.408088691s

• [SLOW TEST:20.524 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
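The downward API volume test above runs a container that has no CPU limit set and reads limits.cpu from a downwardAPI volume file, which then reports the node's allocatable CPU. A sketch of that pod shape, with illustrative names, image, and file path:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No resources.limits.cpu is set, which is exactly the case the test
				// covers: the downward API then falls back to node allocatable CPU.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1"),
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```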
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:12:02.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 25 13:12:35.003: INFO: Container started at 2020-01-25 13:12:11 +0000 UTC, pod became ready at 2020-01-25 13:12:33 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:12:35.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tgn6n" for this suite.
Jan 25 13:13:01.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:13:01.281: INFO: namespace: e2e-tests-container-probe-tgn6n, resource: bindings, ignored listing per whitelist
Jan 25 13:13:01.298: INFO: namespace e2e-tests-container-probe-tgn6n deletion completed in 26.28240175s

• [SLOW TEST:58.543 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
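The readiness-probe test above checks that a pod is not reported Ready before the probe's initial delay has elapsed and that the container never restarts. A sketch of a pod with such a readiness probe; the image, command, and timings are illustrative, not the test's own:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An exec readiness probe with a generous initial delay: the pod should not
	// become Ready before InitialDelaySeconds has passed.
	probe := &corev1.Probe{
		InitialDelaySeconds: 20,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// The handler is set through the embedded struct's promoted field, so this
	// compiles against both older (Handler) and newer (ProbeHandler) k8s.io/api versions.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver", Labels: map[string]string{"test": "readiness"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "readiness",
				Image:          "docker.io/library/busybox:1.29",
				Command:        []string{"sh", "-c", "sleep 10 && touch /tmp/ready && sleep 600"},
				ReadinessProbe: probe,
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```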
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:13:01.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 25 13:13:01.693: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 13:13:01.708: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 13:13:01.712: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 25 13:13:01.732: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 25 13:13:01.732: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 25 13:13:01.732: INFO: 	Container coredns ready: true, restart count 0
Jan 25 13:13:01.732: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 25 13:13:01.732: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 13:13:01.732: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 25 13:13:01.732: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 25 13:13:01.732: INFO: 	Container weave ready: true, restart count 0
Jan 25 13:13:01.732: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 13:13:01.732: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 25 13:13:01.732: INFO: 	Container coredns ready: true, restart count 0
Jan 25 13:13:01.732: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 25 13:13:01.732: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ed2369d7ed1239], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:13:02.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-s8h5t" for this suite.
Jan 25 13:13:08.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:13:09.103: INFO: namespace: e2e-tests-sched-pred-s8h5t, resource: bindings, ignored listing per whitelist
Jan 25 13:13:09.116: INFO: namespace e2e-tests-sched-pred-s8h5t deletion completed in 6.303583656s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.818 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
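The scheduler-predicates test above creates a pod whose nodeSelector matches no label on any node and expects it to stay Pending with the FailedScheduling event shown in the log. A sketch of such a pod, with an illustrative selector key/value:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A nodeSelector no node satisfies leaves the pod unschedulable; the scheduler
	// records an event like "0/1 nodes are available: 1 node(s) didn't match node selector."
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"label": "nonexistent-value", // illustrative key/value that no node carries
			},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```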
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:13:09.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 25 13:13:09.442: INFO: Waiting up to 5m0s for pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-containers-7vkt8" to be "success or failure"
Jan 25 13:13:09.476: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 33.64448ms
Jan 25 13:13:11.540: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097927078s
Jan 25 13:13:13.612: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169855313s
Jan 25 13:13:15.870: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427866114s
Jan 25 13:13:18.036: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593483344s
Jan 25 13:13:20.954: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 11.511927901s
Jan 25 13:13:22.966: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.524149063s
STEP: Saw pod success
Jan 25 13:13:22.967: INFO: Pod "client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:13:22.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 13:13:23.544: INFO: Waiting for pod client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:13:23.571: INFO: Pod client-containers-6ea4ab61-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:13:23.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7vkt8" for this suite.
Jan 25 13:13:31.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:13:32.047: INFO: namespace: e2e-tests-containers-7vkt8, resource: bindings, ignored listing per whitelist
Jan 25 13:13:32.292: INFO: namespace e2e-tests-containers-7vkt8 deletion completed in 7.145754735s

• [SLOW TEST:23.176 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
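The Docker Containers test above verifies that setting only command on a container replaces the image's default ENTRYPOINT. A sketch of such a pod, using illustrative values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-command"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Setting only command replaces the image's ENTRYPOINT; since args is
				// not set, the image's default CMD is ignored as well.
				Command: []string{"/bin/echo", "command", "override"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```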
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:13:32.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 25 13:13:32.634: INFO: Waiting up to 5m0s for pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-containers-2kk89" to be "success or failure"
Jan 25 13:13:32.653: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 18.889159ms
Jan 25 13:13:34.709: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074593587s
Jan 25 13:13:36.733: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098451459s
Jan 25 13:13:38.823: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188223762s
Jan 25 13:13:40.871: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.236315841s
Jan 25 13:13:42.897: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.262396647s
STEP: Saw pod success
Jan 25 13:13:42.897: INFO: Pod "client-containers-7c777258-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:13:42.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7c777258-3f74-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 13:13:43.035: INFO: Waiting for pod client-containers-7c777258-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:13:43.316: INFO: Pod client-containers-7c777258-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:13:43.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-2kk89" for this suite.
Jan 25 13:13:49.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:13:49.657: INFO: namespace: e2e-tests-containers-2kk89, resource: bindings, ignored listing per whitelist
Jan 25 13:13:49.744: INFO: namespace e2e-tests-containers-2kk89 deletion completed in 6.411738926s

• [SLOW TEST:17.451 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
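The companion test above sets both command and args, overriding the image's ENTRYPOINT and CMD respectively. A sketch, again with illustrative values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-command-args"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// command replaces the image ENTRYPOINT and args replaces the image CMD,
				// so the container runs exactly "/bin/echo override arguments".
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```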
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:13:49.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-86ee15e7-3f74-11ea-8a8b-0242ac110006
STEP: Creating a pod to test consume secrets
Jan 25 13:13:50.101: INFO: Waiting up to 5m0s for pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-secrets-dc72l" to be "success or failure"
Jan 25 13:13:50.316: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 214.52354ms
Jan 25 13:13:52.753: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.650792773s
Jan 25 13:13:54.763: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.661227789s
Jan 25 13:13:56.775: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673725424s
Jan 25 13:13:58.873: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.771689225s
Jan 25 13:14:00.912: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.810342918s
Jan 25 13:14:02.950: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.847767537s
STEP: Saw pod success
Jan 25 13:14:02.950: INFO: Pod "pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:14:02.952: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006 container secret-volume-test: 
STEP: delete the pod
Jan 25 13:14:02.992: INFO: Waiting for pod pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:14:03.014: INFO: Pod pod-secrets-86ef3f94-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:14:03.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dc72l" for this suite.
Jan 25 13:14:09.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:14:09.238: INFO: namespace: e2e-tests-secrets-dc72l, resource: bindings, ignored listing per whitelist
Jan 25 13:14:09.287: INFO: namespace e2e-tests-secrets-dc72l deletion completed in 6.262467418s

• [SLOW TEST:19.543 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
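The Secrets test above mounts a secret volume with defaultMode set and checks the resulting file permissions. A sketch of the secret plus a consuming pod, with illustrative key, value, mode, and image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")}, // illustrative key/value
	}

	defaultMode := int32(0400) // files in the volume are created with mode 0400
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secret.Name,
						DefaultMode: &defaultMode,
					},
				},
			}},
		},
	}

	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```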
SS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:14:09.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-r8k9d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8k9d to expose endpoints map[]
Jan 25 13:14:09.638: INFO: Get endpoints failed (16.150649ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 25 13:14:10.655: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8k9d exposes endpoints map[] (1.033596499s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-r8k9d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8k9d to expose endpoints map[pod1:[80]]
Jan 25 13:14:14.803: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.114283238s elapsed, will retry)
Jan 25 13:14:20.865: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8k9d exposes endpoints map[pod1:[80]] (10.176146827s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-r8k9d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8k9d to expose endpoints map[pod1:[80] pod2:[80]]
Jan 25 13:14:26.392: INFO: Unexpected endpoints: found map[9334ec5e-3f74-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.514203727s elapsed, will retry)
Jan 25 13:14:32.033: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8k9d exposes endpoints map[pod1:[80] pod2:[80]] (11.155304634s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-r8k9d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8k9d to expose endpoints map[pod2:[80]]
Jan 25 13:14:33.630: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8k9d exposes endpoints map[pod2:[80]] (1.585670714s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-r8k9d
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-r8k9d to expose endpoints map[]
Jan 25 13:14:33.788: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-r8k9d exposes endpoints map[] (104.198116ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:14:34.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-r8k9d" for this suite.
Jan 25 13:14:58.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:14:58.407: INFO: namespace: e2e-tests-services-r8k9d, resource: bindings, ignored listing per whitelist
Jan 25 13:14:58.471: INFO: namespace e2e-tests-services-r8k9d deletion completed in 24.25830702s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.184 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
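The Services test above creates the endpoint-test2 service and then adds and removes backing pods, watching the Endpoints object track them (map[pod1:[80]] and so on in the log). A sketch of the service and one backing pod; the selector label and image are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"name": "pod1"} // illustrative selector label

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: labels,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "docker.io/library/nginx:1.14-alpine",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}

	// Once pod1 is Running and matches the selector, the endpoints controller adds it
	// to the "endpoint-test2" Endpoints object; deleting the pod removes it again.
	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```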
SSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:14:58.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 25 13:14:58.707: INFO: Waiting up to 5m0s for pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-downward-api-hgsbt" to be "success or failure"
Jan 25 13:14:58.715: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 7.908643ms
Jan 25 13:15:00.740: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033082565s
Jan 25 13:15:02.755: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047796604s
Jan 25 13:15:04.771: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064283707s
Jan 25 13:15:07.041: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333700597s
Jan 25 13:15:09.108: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40065546s
Jan 25 13:15:11.137: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.429757557s
STEP: Saw pod success
Jan 25 13:15:11.137: INFO: Pod "downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:15:11.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006 container dapi-container: 
STEP: delete the pod
Jan 25 13:15:11.305: INFO: Waiting for pod downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:15:11.316: INFO: Pod downward-api-afd5346a-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:15:11.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hgsbt" for this suite.
Jan 25 13:15:17.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:15:17.544: INFO: namespace: e2e-tests-downward-api-hgsbt, resource: bindings, ignored listing per whitelist
Jan 25 13:15:17.624: INFO: namespace e2e-tests-downward-api-hgsbt deletion completed in 6.297545714s

• [SLOW TEST:19.152 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
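The downward API test above exposes a container's CPU/memory limits and requests as environment variables. A sketch of that pod shape; the resource values, variable names, and image are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env | grep -E 'CPU|MEMORY'"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				// Each env var is filled from the container's own resource fields.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu", Divisor: resource.MustParse("1")},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory", Divisor: resource.MustParse("1")},
					}},
					{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu", Divisor: resource.MustParse("1")},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory", Divisor: resource.MustParse("1")},
					}},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```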
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:15:17.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 13:15:17.872: INFO: Waiting up to 5m0s for pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006" in namespace "e2e-tests-emptydir-hwbrc" to be "success or failure"
Jan 25 13:15:17.882: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 9.999616ms
Jan 25 13:15:19.914: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041352265s
Jan 25 13:15:21.940: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068059297s
Jan 25 13:15:24.112: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240076818s
Jan 25 13:15:26.231: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358816181s
Jan 25 13:15:28.559: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.687219051s
STEP: Saw pod success
Jan 25 13:15:28.560: INFO: Pod "pod-bb412f1c-3f74-11ea-8a8b-0242ac110006" satisfied condition "success or failure"
Jan 25 13:15:28.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bb412f1c-3f74-11ea-8a8b-0242ac110006 container test-container: 
STEP: delete the pod
Jan 25 13:15:29.024: INFO: Waiting for pod pod-bb412f1c-3f74-11ea-8a8b-0242ac110006 to disappear
Jan 25 13:15:29.278: INFO: Pod pod-bb412f1c-3f74-11ea-8a8b-0242ac110006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:15:29.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hwbrc" for this suite.
Jan 25 13:15:35.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:15:35.716: INFO: namespace: e2e-tests-emptydir-hwbrc, resource: bindings, ignored listing per whitelist
Jan 25 13:15:35.778: INFO: namespace e2e-tests-emptydir-hwbrc deletion completed in 6.473984621s

• [SLOW TEST:18.154 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
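The "pod to test emptydir 0777 on tmpfs" step above boils down to an emptyDir volume with medium Memory (tmpfs) mounted into a short-lived container that creates a world-writable file and reports its mode. A minimal sketch of that shape follows, using the core/v1 Go types; it only builds and prints the pod, and the image, command, and mount path are illustrative assumptions rather than the e2e framework's mounttest fixture.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// emptyDir backed by tmpfs (medium "Memory"); the container writes a
	// 0777 file into the mount and prints the resulting mode.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	// Print the manifest instead of creating it, so this runs without
	// cluster access; the test itself waits for "Succeeded" and reads logs.
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

------------------------------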
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:15:35.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 25 13:15:36.000: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-8l9g8" to be "success or failure"
Jan 25 13:15:36.128: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 127.801724ms
Jan 25 13:15:38.143: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143500123s
Jan 25 13:15:40.176: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175646274s
Jan 25 13:15:43.249: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.249329152s
Jan 25 13:15:45.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.264185971s
Jan 25 13:15:47.739: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.73948827s
Jan 25 13:15:49.758: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.757652361s
Jan 25 13:15:51.775: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.775323457s
STEP: Saw pod success
Jan 25 13:15:51.776: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 25 13:15:51.783: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 25 13:15:52.353: INFO: Waiting for pod pod-host-path-test to disappear
Jan 25 13:15:52.666: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:15:52.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-8l9g8" for this suite.
Jan 25 13:15:58.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:15:59.021: INFO: namespace: e2e-tests-hostpath-8l9g8, resource: bindings, ignored listing per whitelist
Jan 25 13:15:59.117: INFO: namespace e2e-tests-hostpath-8l9g8 deletion completed in 6.388513304s

• [SLOW TEST:23.337 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:15:59.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan 25 13:15:59.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 25 13:16:01.020: INFO: stderr: ""
Jan 25 13:16:01.020: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:16:01.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fdgt9" for this suite.
Jan 25 13:16:07.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:16:07.375: INFO: namespace: e2e-tests-kubectl-fdgt9, resource: bindings, ignored listing per whitelist
Jan 25 13:16:07.393: INFO: namespace e2e-tests-kubectl-fdgt9 deletion completed in 6.359306247s

• [SLOW TEST:8.276 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
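The cluster-info check above is essentially string matching on kubectl output (the \x1b escapes in the stdout line are kubectl's terminal colouring). A standalone sketch using os/exec is shown below; the kubeconfig path is the one logged above, and the expected substrings assume a cluster of this era, where the line reads "Kubernetes master" rather than the newer "Kubernetes control plane".

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same command the test runs and keep stdout and stderr together.
	cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "cluster-info")
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("kubectl cluster-info failed: %v\n%s", err, out))
	}

	// The conformance check only asserts that the control-plane services appear.
	for _, want := range []string{"Kubernetes master", "KubeDNS"} {
		if !strings.Contains(string(out), want) {
			panic(fmt.Sprintf("cluster-info output missing %q", want))
		}
	}
	fmt.Println("cluster-info lists the expected master services")
}

------------------------------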
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:16:07.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4bcw2;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4bcw2;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4bcw2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.247.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.247.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.247.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.247.73_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4bcw2;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4bcw2;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4bcw2.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4bcw2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.247.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.247.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.247.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.247.73_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 13:16:24.206: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.223: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.235: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4bcw2 from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.243: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4bcw2 from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.247: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4bcw2.svc from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.253: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4bcw2.svc from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.266: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.275: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.281: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.289: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.296: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.306: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006: the server could not find the requested resource (get pods dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006)
Jan 25 13:16:24.410: INFO: Lookups using e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-4bcw2 wheezy_tcp@dns-test-service.e2e-tests-dns-4bcw2 wheezy_udp@dns-test-service.e2e-tests-dns-4bcw2.svc wheezy_tcp@dns-test-service.e2e-tests-dns-4bcw2.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4bcw2.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4bcw2.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Jan 25 13:16:29.689: INFO: DNS probes using e2e-tests-dns-4bcw2/dns-test-d91f0a99-3f74-11ea-8a8b-0242ac110006 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:16:30.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-4bcw2" for this suite.
Jan 25 13:16:38.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:16:38.468: INFO: namespace: e2e-tests-dns-4bcw2, resource: bindings, ignored listing per whitelist
Jan 25 13:16:38.693: INFO: namespace e2e-tests-dns-4bcw2 deletion completed in 8.388894067s

• [SLOW TEST:31.299 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 25 13:16:38.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 25 13:16:39.205: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418153,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 13:16:39.206: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418153,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 25 13:16:49.238: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418166,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 25 13:16:49.239: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418166,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 25 13:16:59.264: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418178,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 13:16:59.265: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418178,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 25 13:17:09.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418191,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 13:17:09.295: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-a,UID:ebbb9b31-3f74-11ea-a994-fa163e34d433,ResourceVersion:19418191,Generation:0,CreationTimestamp:2020-01-25 13:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 25 13:17:19.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-b,UID:03a47338-3f75-11ea-a994-fa163e34d433,ResourceVersion:19418204,Generation:0,CreationTimestamp:2020-01-25 13:17:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 13:17:19.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-b,UID:03a47338-3f75-11ea-a994-fa163e34d433,ResourceVersion:19418204,Generation:0,CreationTimestamp:2020-01-25 13:17:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 25 13:17:29.352: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-b,UID:03a47338-3f75-11ea-a994-fa163e34d433,ResourceVersion:19418217,Generation:0,CreationTimestamp:2020-01-25 13:17:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 13:17:29.353: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lhm6l,SelfLink:/api/v1/namespaces/e2e-tests-watch-lhm6l/configmaps/e2e-watch-test-configmap-b,UID:03a47338-3f75-11ea-a994-fa163e34d433,ResourceVersion:19418217,Generation:0,CreationTimestamp:2020-01-25 13:17:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 25 13:17:39.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-lhm6l" for this suite.
Jan 25 13:17:45.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 25 13:17:45.696: INFO: namespace: e2e-tests-watch-lhm6l, resource: bindings, ignored listing per whitelist
Jan 25 13:17:45.722: INFO: namespace e2e-tests-watch-lhm6l deletion completed in 6.225901417s

• [SLOW TEST:67.029 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
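The watch mechanics exercised above are straightforward to reproduce with client-go: open a watch filtered by a label selector and consume ADDED/MODIFIED/DELETED events from the result channel. A minimal sketch follows, assuming the pre-1.17 client-go call signatures that match this cluster's era (newer releases add a context argument to Watch); the namespace and label value mirror the "label A" watcher in the steps above.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch only configmaps carrying label A, like the first watcher above.
	w, err := clientset.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event carries the type (ADDED/MODIFIED/DELETED) and the full object.
	for event := range w.ResultChan() {
		cm, ok := event.Object.(*v1.ConfigMap)
		if !ok {
			continue
		}
		fmt.Printf("Got : %s %s (resourceVersion %s, data %v)\n",
			event.Type, cm.Name, cm.ResourceVersion, cm.Data)
	}
}

------------------------------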
SSSSSSSS
Jan 25 13:17:45.722: INFO: Running AfterSuite actions on all nodes
Jan 25 13:17:45.722: INFO: Running AfterSuite actions on node 1
Jan 25 13:17:45.722: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9030.183 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS